diff --git a/package.json b/package.json index a620009..cedb2e4 100644 --- a/package.json +++ b/package.json @@ -1,7 +1,7 @@ { "name": "opsmaru-docs", "type": "module", - "version": "0.1.9", + "version": "0.1.10", "scripts": { "dev": "astro dev", "start": "astro dev", diff --git a/src/content/docs/application/astro/static.md b/src/content/docs/application/astro/static.md index 72e67e2..43c2219 100644 --- a/src/content/docs/application/astro/static.md +++ b/src/content/docs/application/astro/static.md @@ -3,7 +3,7 @@ title: Deploy an Astro static site description: This guide will walk you through deploying an astro application to OpsMaru. --- -Once you have provisioned your infrastructure by running it using terraform, the next step is getting your application deployed. This guide will show you how to configure your phoenix app for deployment. +Once you have provisioned your infrastructure by running it using a Terraform runner, the next step is getting your application deployed. This guide will show you how to configure your Astro app for deployment. :::note[Connect your repo] Make sure you have [connected your repository](/docs/application/connect-repository/) to OpsMaru. You should see your app in the `Existing Apps` tab. @@ -15,7 +15,7 @@ Click on `Configure` on the target app you want to deploy. ## Astro static site -For your Astro app select the `base-astro-static` pack. This build pack will build your astro site and serve it using the `caddy` web server. +For your Astro app, select the `base-astro-static` pack. This build pack will build your Astro site and serve it using the `caddy` web server. ![app directory](../../../../assets/application/astro/build-pack-config.png) @@ -40,7 +40,7 @@ The static site config depends on a `Caddyfile`. You can add a `Caddyfile` with Once you click `Next` you'll see a preview and breakdown of the configuration that will be delivered to your application. -This configuration is meant as a starting point for your application. 
You can configure this further once it's submitted to your repository as a pull-request. +This configuration is meant as a starting point for your application. You can configure this further once it's submitted to your repository as a pull request. -Once you've merged the config and added the `Caddyfile` your static site should build and deploy. +Once you've merged the config and added the `Caddyfile`, your static site should build and deploy. diff --git a/src/content/docs/application/connect-repository.md b/src/content/docs/application/connect-repository.md index c096ba8..00d5379 100644 --- a/src/content/docs/application/connect-repository.md +++ b/src/content/docs/application/connect-repository.md @@ -16,6 +16,6 @@ Once you add the repositories you should be redirected back to OpsMaru and see a ![setup repository](../../../assets/application/setup-repository.png) -Once you click `Setup` on the repository you can begin configuring your application. Each framework / language can be configured differently choose the one you want to deploy. +Once you click `Setup` on the repository you can begin configuring your application. Each framework / language can be configured differently. Choose the one you want to deploy. diff --git a/src/content/docs/application/phoenix/clustering.md b/src/content/docs/application/phoenix/clustering.md index e132f3d..5e1ec35 100644 --- a/src/content/docs/application/phoenix/clustering.md +++ b/src/content/docs/application/phoenix/clustering.md @@ -5,13 +5,13 @@ description: This guide will show you how to setup automated clustering for your Elixir apps support clustering out of the box. Clustering can bring many advantages like: -+ Distributed Caching -+ Fault Tolerance ++ Distributed caching ++ Fault tolerance + Distributing workload across cluster ## libcluster -If you wish to use clustering in your application please make sure you have `:libcluster` installed as a dependency. 
We also have a [hex package](https://hex.pm/packages/libcluster_uplink) that will make it easy for clustering your application. +If you wish to use clustering in your application, please make sure you have `:libcluster` installed as a dependency. We also have a [hex package](https://hex.pm/packages/libcluster_uplink) that will make clustering in your application easy. ```elixir title="mix.exs" def deps do diff --git a/src/content/docs/application/phoenix/database-certificate.md b/src/content/docs/application/phoenix/database-certificate.md index 50be1e0..3a7abdc 100644 --- a/src/content/docs/application/phoenix/database-certificate.md +++ b/src/content/docs/application/phoenix/database-certificate.md @@ -5,7 +5,7 @@ description: Enabling ssl connection for your database. We recommend you enable the `ssl` option in your ecto configuration. We expose the database certificate from the infrastructure provider to your application. This guide shows you how to configure ssl / tls connectivity for your database. -In `config/runtime.exs` under the `:prod` configuration we recommend the following code snippet. When you selected the `postgresql` addon for your elixir / phoenix application. It automatically exposes `DATABASE_CERT_PEM` to your application. +In `config/runtime.exs` under the `:prod` configuration, we recommend the following code snippet when you've selected the `postgresql` add-on for your Elixir / Phoenix application. The add-on automatically exposes `DATABASE_CERT_PEM` to your application. ```elixir title="config/runtime.exs" if config_env() == :prod do diff --git a/src/content/docs/application/rails/deployment.md b/src/content/docs/application/rails/deployment.md index c7dc2fa..a865445 100644 --- a/src/content/docs/application/rails/deployment.md +++ b/src/content/docs/application/rails/deployment.md @@ -3,7 +3,7 @@ title: Deploy a Rails App description: This guide will walk you through deploying a Rails application to OpsMaru. 
--- -Once you have provisioned your infrastructure by running it using terraform, the next step is getting your application deployed. This guide will show you how to configure your rails app for deployment. +Once you have provisioned your infrastructure by running it using a Terraform runner, the next step is getting your application deployed. This guide will show you how to configure your Rails app for deployment. :::note[Connect your repo] Make sure you have [connected your repository](/docs/application/connect-repository/) to OpsMaru. You should see your app in the `Existing Apps` tab. @@ -15,7 +15,7 @@ Click on `Configure` on the target app you want to deploy. ## Rails Build Pack -For your Rails app select the `base-rails` pack. This sets up all the system dependencies and then choose your add-ons. +For your Rails app, select the `base-rails` pack. This sets up all the system dependencies. Then choose your add-ons. ![build pack config](../../../../assets/application/rails/build-pack-config.png) @@ -29,7 +29,7 @@ In this guide we're choosing the following: If you don't see the add-on or pack you want to use, please [reach out](https://github.com/orgs/upmaru/discussions). We will work towards getting the add-on you want into our platform. :::note[Alpine 3.18 / 3.19] -You may note that Alpine 3.18 and 3.19 has the same version of ruby which is 3.2.2, in 3.19 ruby is built with support for yjit. If you've tested and your application works with yjit choose alpine 3.19. +You may note that Alpine 3.18 and 3.19 have the same version of Ruby (3.2.2). In 3.19, Ruby is built with YJIT support. If you've tested and your application works with YJIT, choose Alpine 3.19. ::: ## Configuration Generation @@ -38,5 +38,5 @@ Once you click `Next` you'll be able to see a preview and breakdown of the confi ![generated config](../../../../assets/application/rails/generated-config.png) -This configuration is meant as a starting point for your application. 
You can configure this further once it's submitted to your repository as a pull-request. +This configuration is meant as a starting point for your application. You can configure this further once it's submitted to your repository as a pull request. diff --git a/src/content/docs/application/useful-commands.md b/src/content/docs/application/useful-commands.md index 70f4db9..5bf55b8 100644 --- a/src/content/docs/application/useful-commands.md +++ b/src/content/docs/application/useful-commands.md @@ -1,6 +1,6 @@ --- title: Useful Commands -description: This page will show you some useful commands about your application. +description: This page will show you some useful commands for handling your application. --- There is something to note about the generated configuration. If you scroll to the `run` section of your configuration, you will see a list of `commands`. These are commands you can run. @@ -13,7 +13,7 @@ This guide assumes you've already [added your cluster](/docs/infrastructure/acce Make sure you run `lxc remote list` and check if you have the right cluster selected. ::: -## Jumping into the Container +## Jumping into the container Applications are provisioned inside LXD containers. You can jump into the container in the following way: @@ -41,7 +41,7 @@ rc-service app-name restart ## Migrations -If you use frameworks like Ruby on Rails or Phoenix you may have to work with migrations. They can be run in the following way. +If you use frameworks like Ruby on Rails or Phoenix, you may have to work with migrations. They can be run in the following way. ```bash rc-service app-name migrate @@ -57,7 +57,7 @@ rc-service app-name logs ## Console -If your application has console access you can use the following command. +If your application has console access, you can use the following command. 
```bash rc-service app-name console @@ -69,4 +69,4 @@ If you wish to execute commands you can do so using the following: ```bash lxc exec container-name --project project.name -- rc-service app-name [command] -``` \ No newline at end of file +``` diff --git a/src/content/docs/build/conditional-deployment.md b/src/content/docs/build/conditional-deployment.md index 70649fe..8213626 100644 --- a/src/content/docs/build/conditional-deployment.md +++ b/src/content/docs/build/conditional-deployment.md @@ -3,7 +3,7 @@ title: Conditional Deployment description: This guide shows you how to customize your deployment workflow to only deploy when a previous workflow succeeds. --- -The default out-of-the-box `deployment.yml` will deploy every time you push to one of the 3 branches and configured in the `push` section: +The default out-of-the-box `deployment.yml` will deploy every time you push to one of the three branches configured in the `push` section: ```yaml title=".github/workflows/deployment.yml" on: @@ -14,7 +14,7 @@ on: - develop ``` -You can customize this based on your needs. You may have a CI workflow that runs tests and you only want to deploy when the tests pass. You can do this by configuring the `on` section to your `deployment.yml` file. +You can customize this based on your needs. For example, you may have a CI workflow that runs tests and you only want to deploy when the tests pass. You can do this by configuring the `on` section to your `deployment.yml` file. ```yaml title=".github/workflows/deployment.yml" on: @@ -39,7 +39,7 @@ jobs: ## Swapping out References -Given that the deployment will now be triggerd by another workflow instead of a push, the action will not have access to the correct `ref` and `sha`. We need to reference the `ref` and `sha` from the workflow that triggered the `deployment`. +Given that the deployment will now be triggered by another workflow instead of a push, the action will not have access to the correct `ref` and `sha`. 
We need to reference the `ref` and `sha` from the workflow that triggered the `deployment`. You will also need to replace the following values: diff --git a/src/content/docs/build/github-action.md b/src/content/docs/build/github-action.md index 4294f23..d404230 100644 --- a/src/content/docs/build/github-action.md +++ b/src/content/docs/build/github-action.md @@ -1,9 +1,9 @@ --- -title: PAKman Github Action +title: PAKman GitHub Action description: PAKman is a build system for building packages for your application. --- -PAKman is available to users as a github action. This guide will give you a breakdown of how PAKman works as a github action. When you use the Application setup in our app you will also get the deployment.yml configuration for your github actions. +PAKman is available to users as a GitHub Action. This guide will give you a breakdown of how PAKman works as a GitHub Action. When you use the Application setup in our app, you will also get the deployment.yml configuration for your GitHub Actions. ```yaml title=".github/workflows/deployment.yml" name: 'Deployment' @@ -89,11 +89,11 @@ jobs: ## Build -In PAKman v8 the build and deploy steps are separate. This is to allow for a retry of the deployment without having to rebuild the package. This is important because the build process can be time consuming and we want to avoid unnecessary rebuilds. +In PAKman v8, the build and deploy steps are separate. This is to allow for a retry of the deployment without having to rebuild the package. This is important because the build process can be time consuming and we want to avoid unnecessary rebuilds. ### Setup Pakman -The `upmaru/pakman@v8` github action uses the `setup-alpine` action underneath. This basically sets up alpine linux as chroot inside the default ubuntu runtime in github actions. You can customize the version of alpine using the `with` option. +The `upmaru/pakman@v8` GitHub Action uses the `setup-alpine` action underneath. 
This basically sets up Alpine Linux as a chroot inside the default Ubuntu runtime in GitHub Actions. You can customize the version of Alpine using the `with` option. ```yaml - name: Setup Pakman @@ -120,7 +120,7 @@ We also pass the command 2 environment variables `ABUILD_PRIVATE_KEY` and `ABUIL ### Build Package -In the next step the action will run `abuild` which is the tool used for building alpine packages. +In the next step, the Action will run `abuild`, which is the tool used for building Alpine packages. ```yaml @@ -135,7 +135,7 @@ In the next step the action will run `abuild` which is the tool used for buildin ### Upload Artifact -This is a standard github action. All it does is upload the built artifact. The artifact is stored on github's storage, and will be evicted based on your configuration in github. This step is important because it prevents rebuilding when we need to retry a deployment as you'll see in the next section. +This is a standard GitHub Action. All it does is upload the built artifact. The artifact is stored on GitHub's storage, and will be evicted based on your configuration in GitHub. This step is important because it prevents rebuilding when we need to retry a deployment, as you'll see in the next section. ```yaml - name: Upload Artifact @@ -161,7 +161,7 @@ We download the artifact from the previous step. ### Setup Pakman -Since we want to run pakman in alpine we'll setup PAKman again. This is not time consuming because PAKman utilizes caching on github action. If PAKman is already built it will simply load the cache. This is another feature of PAKman v8. +Since we want to run PAKman in Alpine, we'll set up PAKman again. This is not time consuming because PAKman utilizes caching in GitHub Actions. If PAKman is already built, it will simply load the cache. This is another feature of PAKman v8. ```yaml - name: Setup Pakman @@ -172,7 +172,7 @@ Since we want to run pakman in alpine we'll set up PAKman again. 
This is not time ### Merge Artifact -In this step we take all the artifacts built as separate X64 or in the future arm architecture and we merge them into a single zip file. +In this step we take all the artifacts, built separately for each architecture (x86_64 today, ARM in the future), and merge them into a single zip file. ```yaml - name: Merge Artifact diff --git a/src/content/docs/build/pakman.md b/src/content/docs/build/pakman.md index a2c3229..9807166 100644 --- a/src/content/docs/build/pakman.md +++ b/src/content/docs/build/pakman.md @@ -5,10 +5,10 @@ description: PAKman is a build system for building packages for your application PAKman is a build system for building packages for your application. It's a simple way to build packages for your application and deploy them to your infrastructure. -We recommend using the build pack selection from the UI to generate the bulk of the configuration but having an understanding of the generated file is also important. +We recommend using the build pack selection from the UI to generate the bulk of the configuration, but having an understanding of the generated file is also important. :::note[PAKman is Open Sourced] -PAKman is open sourced and available on [GitHub](https://github.com/upmaru/pakman). Feel free to let us know if you have any suggestions. +PAKman is open sourced and available on [GitHub](https://github.com/upmaru/pakman). Feel free to let us know if you have any feedback or suggestions. There is also a [blog post](https://zacksiri.dev/posts/why-i-created-pakman) by the author (who also happens to be one of the founders of OpsMaru) about how all this works. ::: @@ -50,13 +50,13 @@ dependencies: ### Stack -The stack is where you choose which version of alpine you use. Stacks are similar to heroku's stack. The only difference is heroku's stack is based on ubuntu. +The stack is where you choose which version of Alpine you use. Stacks are similar to Heroku's stacks. The only difference is that Heroku's stacks are based on Ubuntu. 
```yaml title="instellar.yml" stack: alpine/3.18 ``` -Each stack has different version of dependencies. You can choose which stack to use depending on which version of your dependency you wish to use. +Each stack has different versions of dependencies. You can choose which stack to use depending on which version of your dependency you wish to use. ![setup repository](../../../assets/build/runtime-versions.png) @@ -86,11 +86,11 @@ The destinations directive tells the build system that the files in the destinat Hooks are commands that are run at different stages of the installation process. There are 4 main stages in the lifecycle. -+ post-install - These commands run after the package is installed, in the example below we add the app `devspace` to the default application to run when the OS starts. That way if we restart the container the application automatically starts. We also run `rc-service devspace migrate`. This basically runs database migration. ++ post-install - These commands run after the package is installed. In the example below, we add the app `devspace` to the set of applications that run by default when the OS starts. That way, if we restart the container, the application starts automatically. We also run `rc-service devspace migrate`. This basically runs database migration. + pre-upgrade - These commands run before upgrade. In this case we stop the application before the upgrade. -+ post-upgrade - These commands run after the package is upgraded. In this case we want to run migrations and then start the new upgraded version of the app. ++ post-upgrade - These commands run after the package is upgraded. In this case we want to run migrations and then start the new, upgraded version of the app. + post-deinstall - These commands run after the package is removed. Generally it is not used since we just delete the container and provision a new one if something goes wrong. @@ -112,11 +112,11 @@ hook: ### Run -The `run` section describes how the application runs. 
It provides commands that tell the system how to run the application. In the example below you will see the `commands` sectiona nd the `services` section. +The `run` section describes how the application runs. It provides commands that tell the system how to run the application. In the example below, you will see the `commands` section and the `services` section. + services - These are commands that start the service of your application. In the example below we're starting a rails application by calling the `rails` binary followed with the `server` which essentially becomes `rails server`. In this case we also have a background worker called `good-job`. The below configuration essentially runs `bundle exec good_job start` -+ commands - These are commands that run as one time and are not long lived processes. For example you can access the rails console by running `rc-service devspace console` or access the logs by using `rc-service devspace logs`. ++ commands - These are commands that run one time and are not long lived processes. For example you can access the rails console by running `rc-service devspace console` or access the logs by using `rc-service devspace logs`. Doing things this way enables us to `normalize` the experience. No matter what language or framework you use, you can use `rc-service [app-name] [command]` to run the commands you need. @@ -151,14 +151,14 @@ run: ### Kits -Kits are something specific to `instellar` our core engine that orchestrates deployments. Kits provide a way to configure things specific to your application and answers the following questions: +Kits are something specific to `instellar`, our core engine that orchestrates deployments. Kits provide a way to configure things specific to your application. Kits answer the following questions: + Which port of my service to expose? + What are the environment variables my app depends on? + Which environment variables are automatically provisioned? 
+ Which environment variables should be exposed to the application? + Which environment variables are required / optional? -+ What are the application specific configuration required for resources? ++ What are the application-specific configurations required for resources? In the example below we have a `web` kit, which is the main kit. We also have a `fork` which is the `good-job` process. diff --git a/src/content/docs/infrastructure/digitalocean/credentials.md b/src/content/docs/infrastructure/digitalocean/credentials.md index 464cb8b..73fe2c3 100644 --- a/src/content/docs/infrastructure/digitalocean/credentials.md +++ b/src/content/docs/infrastructure/digitalocean/credentials.md @@ -3,10 +3,10 @@ title: Credentials description: Retrieving your digitalocean credentials. --- -To provision resources with digitalocean you will need 2 types of credentials. +To provision resources with DigitalOcean, you will need 2 types of credentials. -+ Spaces Token - used for setting up spaces bucket -+ API Token - used for setting up compute, databases and networking. ++ Spaces Token - used for setting up Spaces buckets ++ API Token - used for setting up compute, databases and networking ## API Token @@ -18,17 +18,17 @@ Click on the `Generate New Token` button. ![api section](../../../../assets/infrastructure/digitalocean/generate-new-token.png) -Give your token a name you can identify with, be sure to check the `Write` permission, and click `Generate Token`. You can choose an expiration, but you need to make sure to replace the token once it expires. +Give your token a name you can identify it with. Be sure to check the `Write` permission and click `Generate Token`. You can choose an expiration, but make sure to replace the token once it expires. ![api section](../../../../assets/infrastructure/digitalocean/generate-new-token-form.png) -Once you create the token you will see it once, copy it and store it somewhere safe. 
+Once you create the token you will see it only once. Copy it and store it somewhere safe. ![generated api token](../../../../assets/infrastructure/digitalocean/generated-api-token.png) ## Spaces Token -To get the `spaces token` click on the `Spaces Keys` tab. +To get the `spaces token`, click on the `Spaces Keys` tab. ![spaces keys](../../../../assets/infrastructure/digitalocean/spaces-keys-tab.png) @@ -38,20 +38,20 @@ Click on the `Generate new key` button, give the key a name and click `Create ac ## Credential Management -In the terraform configuration generated by our platform, the API token is referenced as `do_token`. +In the Terraform configuration generated by our platform, the API token is referenced as `do_token`. -The spaces access key is referneced as `do_access_key` and `do_secret_key`. +The Spaces access key is referenced as `do_access_key` and `do_secret_key`. :::caution[Credential management] -Do not share these credentials to anyone, please store it safely: +Do not share these credentials with anyone; please store them safely: + Do not check them into source control. -+ If you use terraform cloud user variable sets to store them and check the `sensitive` option. ++ If you use Terraform Cloud, use variable sets to store them and check the `sensitive` option. ::: -### Further Detail +### Further Details -We have a video showing you how to store your credentials inside terraform cloud. +We have a video showing you how to store your credentials inside Terraform Cloud.
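An editorial aside on the `do_token` / `do_access_key` / `do_secret_key` variables documented above: if you run Terraform locally rather than in Terraform Cloud, Terraform's standard `TF_VAR_` environment-variable convention is one way to keep these values out of files that could be committed. This is a sketch with placeholder values, not part of the generated configuration:

```shell
# Sketch (assumption, not OpsMaru-generated): supply DigitalOcean credentials
# to a local Terraform run via TF_VAR_ environment variables, so they never
# land in a .tfvars file. The values below are placeholders.
export TF_VAR_do_token="dop_v1_example_token"
export TF_VAR_do_access_key="EXAMPLE_SPACES_KEY"
export TF_VAR_do_secret_key="EXAMPLE_SPACES_SECRET"
# Then run Terraform as usual, e.g.: terraform plan
```

Terraform reads any `TF_VAR_<name>` variable from the environment and maps it to the input variable `<name>`, which matches the names referenced above.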
diff --git a/src/content/docs/operation/lite-vs-pro.md b/src/content/docs/operation/lite-vs-pro.md index 800350e..2b03769 100644 --- a/src/content/docs/operation/lite-vs-pro.md +++ b/src/content/docs/operation/lite-vs-pro.md @@ -3,19 +3,19 @@ title: Lite vs Pro description: Uplink has 2 modes lite and pro. Here we take a look at the difference. --- -Uplink comes in 2 modes. Lite and Pro. Here we take a look at the difference. +Uplink comes in 2 modes: Lite and Pro. Here we take a look at the difference. ## Lite -When you setup your cluster you will be setup with the `lite` version of uplink. This enables easy provisioning and will run the `uplink-caddy` router on a single node on your cluster. It doesn't use an external database (it has it's own internal postgresql instance). +When you set up your cluster, you will be set up with the `lite` version of Uplink. This enables easy provisioning and will run the `uplink-caddy` router on a single node on your cluster. It doesn't use an external database (it has its own internal PostgreSQL instance). ## Pro -The `pro` version of uplink requires an external postgresql database. It can also run in more than single node on your cluster. This means the `pro` version provides redundancy for `uplink-caddy` which serves traffic to your apps. +The `pro` version of Uplink requires an external PostgreSQL database. It can also run on more than a single node on your cluster. This means the `pro` version provides redundancy for `uplink-caddy`, which serves traffic to your apps. ### Future Plans -Given that uplink `pro` is backed by an external database this opens up many possibilities for what we can achieve. +Given that Uplink `pro` is backed by an external database, this opens up many possibilities for what we can achieve. -+ **Vault** - Will enable `enterprise` users to not store any secrets on OpsMaru cloud. 
With `vault` enabled on their cluster through uplink `pro` secret management becomes completely isolated to the customer's infrastructure. OpsMaru only serves as the front-end for adding / updating secrets, users have the option of 'flushing' secrets from OpsMaru, leaving the data only on their intenral vault. \ No newline at end of file ++ **Vault** - `Vault` will let `enterprise` users avoid storing any secrets on OpsMaru cloud. With `vault` enabled on their cluster through Uplink `pro`, secret management becomes completely isolated within the customer's infrastructure. OpsMaru only serves as the front-end for adding / updating secrets, and users have the option of 'flushing' secrets from OpsMaru, leaving the data only on their internal vault / infrastructure. diff --git a/src/content/docs/operation/uplink.md b/src/content/docs/operation/uplink.md index 052ce75..703f5ff 100644 --- a/src/content/docs/operation/uplink.md +++ b/src/content/docs/operation/uplink.md @@ -3,30 +3,30 @@ title: Uplink Engine description: Uplink operates your cluster and application. --- -Uplink is a tool that operates your cluster and application. When you bootstrap your platform using the `Infrastructure Builder` uplink is the first container that runs. It provides functinality that keeps your cluster running smoothly and automates away most of the complexities in deploying and upgrading your application. +Uplink is a tool that operates your cluster and application. When you bootstrap your platform using the `Infrastructure Builder`, Uplink is the first container that runs. It provides functionality that keeps your cluster running smoothly and automates away most of the complexities in deploying and upgrading your application. :::note[Uplink is Open Sourced] -Uplink is open sourced and available on [GitHub](https://github.com/upmaru/uplink). Feel free to let us know if you have any suggestions. +Uplink is open sourced and available on [GitHub](https://github.com/upmaru/uplink). 
Feel free to let us know if you have any feedback or suggestions. -There is also a [blog post](https://zacksiri.dev/posts/self-provisioning-ecto-based-application) about some of the parts of uplink work. +There is also a [blog post](https://zacksiri.dev/posts/self-provisioning-ecto-based-application) about how some parts of Uplink work. ::: -Below are some of the features that uplink provide. +Below are some of the features that Uplink provides. ## Managing Caddy -Inside the uplink container there are 2 main processes. +Inside the Uplink container there are 2 main processes. -+ `uplink` - This is the uplink app. -+ `uplink-caddy` - This is a [specially built](https://github.com/upmaru/uplink-caddy) version of caddy that serves your application. ++ `uplink` - This is the Uplink app. ++ `uplink-caddy` - This is a [specially built](https://github.com/upmaru/uplink-caddy) version of Caddy that serves your application. -It's responsible for managing SSL certificates and serving your application. Uplink communicates with OpsMaru and ensures all the configuration for caddy are automatically updated when something changes. +`uplink-caddy` is responsible for managing SSL certificates and serving your application. Uplink communicates with OpsMaru and ensures all the configurations for Caddy are automatically updated when something changes. ## Hosting Packages -We've mentioned before that your application / source code never touches our platform. This means everything is served from within your cluster. When a deployment happens Uplink will automatically download the built archive of your application and set it up and trigger an upgrade on the running containers. When application containers are bootstrapped or upgrade you can see evidence of this in the logs. +We've mentioned before that your application / source code never touches our platform. This means everything is served from within your cluster. 
When a deployment happens, Uplink will automatically download the built archive of your application, set it up, and trigger an upgrade on the running containers. When application containers are bootstrapped or upgraded, you can see evidence of this in the logs. -Here is an example +Here is an example: ```shell fetch https://dl-cdn.alpinelinux.org/alpine/v3.19/main/x86_64/APKINDEX.tar.gz @@ -47,18 +47,16 @@ Executing opsmaru-docs-0.1.3-r41.post-upgrade OK: 76 MiB in 53 packages ``` -In the example log above notice that the package is being hosted in the uplink instance that's running inside your cluster `http://uplink-890bc3f9-01:4080/distribution/develop/upmaru/opsmaru-docs` +In the example log above, notice that the package is being hosted in the Uplink instance that's running inside your cluster: `http://uplink-890bc3f9-01:4080/distribution/develop/upmaru/opsmaru-docs`. ## Application Lifecycle Uplink also orchestrates and manages the lifecycle of your application. It's responsible for deploying your application, upgrading it, scaling it, and ensuring it's always running. -If anything fails during an upgrade uplink will provision a new container running your application automatically. +If anything fails during an upgrade, Uplink will provision a new container running your application automatically. ![pakman action cache](../../../assets/operation/uplink-upgrade.png) ## Service Discovery -Uplink provides endpoints that can be used to discover services running in your cluster. This means your application can poll the endpoint to figure out other services running in the cluster that may be relevant. For example [elixir / phoenix clustering](/docs/application/phoenix/clustering/) utilizes service discovery to connect to other nodes. - - +Uplink provides endpoints that can be used to discover services running in your cluster. This means your application can poll the endpoint to figure out which other services running in the cluster may be relevant. 
For example, [elixir / phoenix clustering](/docs/application/phoenix/clustering/) utilizes service discovery to connect to other nodes. diff --git a/src/content/docs/trouble-shooting/platform-provisioning.md b/src/content/docs/trouble-shooting/platform-provisioning.md index 78da43a..2d26670 100644 --- a/src/content/docs/trouble-shooting/platform-provisioning.md +++ b/src/content/docs/trouble-shooting/platform-provisioning.md @@ -3,15 +3,17 @@ title: Platform provisioning description: Problems that crop up during cluster creation. --- -OpsMaru provisions cluster on cloud providers. These are environments we do not control. There are many components and potentially many things can go wrong during the process. While we do our best to ensure a smooth operation there are things that are out of our control these are a list of things that can possibly happen and how to fix them. +OpsMaru provisions clusters on cloud providers. These are environments we do not control. There are many components, and many things can potentially go wrong during the process. While we do our best to ensure a smooth operation, there are things that are out of our control. + +Here is a list of things that can possibly happen and how to fix them. ## API Errors -If you are seeing some kind of API error on the cloud provider, make sure that your credentials are correctly setup and working. Your errors may also be related to bad input values. +If you are seeing some kind of API error from the cloud provider, make sure that your credentials are correctly set up and working. Your errors may also be related to bad input values. ### Parameter Error -Sometimes we can input the wrong value. DigitalOcean for example do not support certain instance types in different regions. All you have to do to resolve such issues is to select a different instance type. +Sometimes we can input the wrong value. DigitalOcean, for example, does not support certain instance types in some regions. 
All you have to do to resolve such issues is to select a different instance type. ![parameter error](../../../assets/trouble-shooting/digitalocean-bastion-size-error.png) @@ -19,7 +21,7 @@ Sometimes we can input the wrong value. DigitalOcean for example do not support Your cluster may have been successfully provisioned, however sometimes there are networking issues that can prevent your installation from being deployed. OpsMaru clusters depend on internal networking communications, and in cases where there are internal networking issues, this can cause failure when booting up your application container. -You can resolve these issues easily by [sshing into](/docs/infrastructure/accessing-your-cluster/) your node and reboot it. +You can resolve these issues easily by [sshing into](/docs/infrastructure/accessing-your-cluster/) your node and rebooting it.
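The SSH-and-reboot fix described in the final hunk can be sketched as a one-liner. The key path, username, and host below are placeholders (use the node address and SSH key from your own infrastructure output), so treat this as an illustration rather than an exact command:

```shell
# Placeholder key/user/host: substitute the values from your own cluster.
# Rebooting over SSH drops the connection immediately, which is expected.
ssh -i ~/.ssh/your-cluster-key ubuntu@your-node-address 'sudo reboot'
```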