diff --git a/.github/styles/HouseStyle/tech-terms/general.txt b/.github/styles/HouseStyle/tech-terms/general.txt
index e7b9d206e..885cb10fb 100644
--- a/.github/styles/HouseStyle/tech-terms/general.txt
+++ b/.github/styles/HouseStyle/tech-terms/general.txt
@@ -524,4 +524,7 @@ predesigned
 redistributable
 precompiled
 Cronhub
-subworkflows
\ No newline at end of file
+subworkflows
+triaging
+rollout
+Rollout
\ No newline at end of file
diff --git a/.github/styles/HouseStyle/tech-terms/git.txt b/.github/styles/HouseStyle/tech-terms/git.txt
index dcb75ad4d..bc7964333 100644
--- a/.github/styles/HouseStyle/tech-terms/git.txt
+++ b/.github/styles/HouseStyle/tech-terms/git.txt
@@ -14,4 +14,6 @@ Checkmarx
 lfs
 packagecloud
-Packagecloud
\ No newline at end of file
+Packagecloud
+
+unstage
\ No newline at end of file
diff --git a/blog/_posts/2023-09-28-advanced-git-commands-2.md b/blog/_posts/2023-09-28-advanced-git-commands-2.md
new file mode 100644
index 000000000..2f3011abf
--- /dev/null
+++ b/blog/_posts/2023-09-28-advanced-git-commands-2.md
@@ -0,0 +1,341 @@
---
title: "10 Advanced Git Commands Part 2"
categories:
  - Tutorials
toc: true
author: Temitope Oyedele
editor: Ubaydah Abdulwasiu

internal-links:
 - advance commands in git
 - github commands
 - advance git commands
 - 10 new git commands
---

The last [article](https://earthly.dev/blog/advanced-git-commands/) discussed ten advanced Git commands you should know as a developer.

In this article, we take a look at ten more advanced commands, including `bisect`, `reset`, and `archive`.

## Git Remote

[Git remote](https://git-scm.com/docs/git-remote) can be used to list, add, remove, and update remote repositories. Git remote allows you to create shortcuts to remote repositories. These shortcuts are called "remote names". You can use remote names to refer to remote repositories in other Git commands. Think of a remote name as a bookmark for another repository rather than a full link. The diagram below further explains git remote.

![git remote command]({{site.images}}{{page.slug}}/n97KMiO.jpg)

The diagram above shows two remote connections from `my repo` into the `main` repo and another developer's repo. The remote names for these connections are `main` and `Atello`.
Instead of referencing the full URLs of these remote repositories in other Git commands, you can use the remote names `main` and `Atello`.

To see the remote names of your current repository, use this command:

~~~{.bash caption=">_"}
git remote
~~~

![Showing remote names associated with the repository]({{site.images}}{{page.slug}}/x0ODzpJ.png)

The output shows the remote names associated with the `season-of_docs` repository: `oyedeletemitope` and `origin`.

To view more detailed information about the remotes, including their URLs, you can use:

~~~{.bash caption=">_"}
git remote -v
~~~

This will show you each remote's name, the repository's URL, and the fetch and push URLs like so:

![Result]({{site.images}}{{page.slug}}/x0ODzpJ.png)

To connect your local repository with a remote repository, you use the git remote add command:

~~~{.bash caption=">_"}
git remote add [name] [url]
~~~

To remove a remote, you can run the following command:

~~~{.bash caption=">_"}
git remote remove [name]
~~~

To update the URL of a remote, you can run the following command:

~~~{.bash caption=">_"}
git remote set-url [name] [new_url]
~~~

To incorporate changes from a remote repository into your local repository, you can use the `git pull` command:

~~~{.bash caption=">_"}
git pull
~~~

## Git Bisect

[git bisect](https://git-scm.com/docs/git-bisect) is a really powerful tool that helps you quickly find a specific commit, most often the one where a bug or issue was introduced into the code base. It uses a binary search algorithm to narrow down the range of commits until it finds the exact commit that caused the problem.

To use `git bisect`, you start with an initial known good commit (a commit where the bug is not present) and an initial known bad commit (a commit where the bug is present). Git will then perform a binary search, automatically selecting a commit between the good and bad commits for you to test.

Based on the outcome of your tests, you provide feedback to Git using either `git bisect good` or `git bisect bad`, indicating whether the bug is present in the selected commit. Git will then automatically choose the next commit to test, effectively halving the commit range each time.

Git continues this process, automatically selecting commits for you to test and adjusting the search range until it pinpoints the specific commit that introduced the bug.

![How the git bisect works]({{site.images}}{{page.slug}}/vNDp0lX.jpg)
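
Before walking through each step, here's a compact sketch of what a whole session looks like. This is only an illustration: the good-commit hash is hypothetical, and in a real session Git prints the commit it checks out after each mark.

~~~{.bash caption=">_"}
# Start a session and mark the endpoints
git bisect start
git bisect bad               # the current HEAD is broken
git bisect good a1b2c3d      # a commit you know was working

# Test the commit Git checks out, mark it with
# `git bisect good` or `git bisect bad`, and repeat
# until Git names the first bad commit

git bisect reset             # end the session and return to HEAD
~~~

Let's walk through those steps one at a time: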

~~~{.bash caption=">_"}
git bisect start
~~~

This will start the bisect process. Once you've marked a known good and a known bad commit, Git checks out the commit halfway between them.

You will then need to test the commit that is checked out. If the commit does not have the bug, you can mark it as "good" using this command:

~~~{.bash caption=">_"}
git bisect good
~~~

If the commit does have the bug, you can mark it as "bad" using this command:

~~~{.bash caption=">_"}
git bisect bad
~~~

Git bisect will check out the commit halfway between the remaining good and bad commits, and you will repeat the process.

This process will continue until git bisect narrows down the range of commits to a single commit: the one that introduced the bug.

## Git Fetch

[git fetch](https://git-scm.com/docs/git-fetch) is a command used in Git to retrieve the latest changes from a remote repository without automatically merging them into your local branch. It allows you to bring your local repository up to date with the remote repository's changes without modifying your current working state.

Here's how git fetch works:

~~~{.bash caption=">_"}
git fetch [remote]
~~~

When you run `git fetch`, Git contacts the remote repository specified by `[remote]` (the default remote name is usually `origin`) and retrieves all the new branches, commits, and other objects that exist in the remote repository but are not present in your local repository.

![git fetch origin]({{site.images}}{{page.slug}}/8OLObm9.png)
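
Once a fetch completes, the new commits sit on remote-tracking branches such as `origin/main`, so you can review them before integrating anything. For example, assuming your local branch is `main` (adjust the branch names to your repository):

~~~{.bash caption=">_"}
# Show commits that exist on the remote but not locally
git log main..origin/main --oneline

# Merge them into your branch once you're happy
git merge origin/main
~~~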

Unlike `git pull`, which automatically merges the fetched changes into your current branch, `git fetch` does not modify your working branch or introduce any changes. Instead, it updates the remote-tracking branches to allow you to inspect the fetched changes and decide how to integrate them later.

## Git Checkout

[git checkout](https://git-scm.com/docs/git-checkout) is a versatile command in Git that allows you to navigate between branches, switch to a specific commit, or restore files to a previous state. It is a fundamental command for managing your Git repository's working directory.

~~~{.bash caption=">_"}
git checkout [branch-or-commit]
~~~

Let's see some use cases of `git checkout`:

### Switching To an Existing Branch

Let's say you have an existing branch called `another-branch` and want to switch to that branch. You can use the following command:

~~~{.bash caption=">_"}
git checkout another-branch
~~~

This command will switch your working directory to the `another-branch` branch.

### Creating and Switching to a New Branch

Suppose you want to create a new branch called `test` and switch to it. You can use the `-b` flag with `git checkout` to create and switch to the new branch in one step.

~~~{.bash caption=">_"}
git checkout -b test
~~~

This command will create and switch to the `test` branch, based on the branch you are currently on.

### Checking Out a Specific Commit

Sometimes, if you need to examine or work on a specific commit, you can use `git checkout` to switch to that commit. For example:

~~~{.bash caption=">_"}
git checkout [commit-hash]
~~~

This command will place you in a detached `HEAD` state, where you are not on a specific branch but directly on the commit.

## Git Branch

The [git branch](https://git-scm.com/docs/git-branch) command is used in Git to manage branches within a repository. It allows you to create, list, and delete branches, and to perform various other branch-related operations. Running it with no arguments lists your branches:

~~~{.bash caption=">_"}
git branch
~~~

This command will display a list of branches, with an asterisk (*) indicating the current branch you are on:

![Displaying the list of branches]({{site.images}}{{page.slug}}/Ktwgmn5.png)
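
By default, `git branch` only lists your local branches. If you also want to see remote-tracking branches, two common flags help (the exact output depends on your remotes):

~~~{.bash caption=">_"}
git branch -a   # list local and remote-tracking branches
git branch -r   # list remote-tracking branches only
~~~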

To create a new branch, you can use the following syntax:

~~~{.bash caption=">_"}
git branch [branch-name]
~~~

For example, to create a branch named "test":

~~~{.bash caption=">_"}
git branch test
~~~

## Git Reset

The [git reset](https://git-scm.com/docs/git-reset) command is used to manipulate the commit history by moving the branch pointer to a specific commit. It allows you to undo changes, unstage files, and modify the state of the repository. The `git reset` command has different options and can be used in various ways.

![git reset]({{site.images}}{{page.slug}}/gRSZul3.jpg)
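
The examples in this section reset to a commit with the hash `8c019e7`. That hash is just a stand-in from the repository used for the screenshots; to follow along, list your own recent commits and substitute one of their hashes:

~~~{.bash caption=">_"}
git log --oneline -5
~~~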

Some of these commands work side by side with each other. A typical example is `git branch` used alongside the `git checkout` command, and `git bisect` used alongside the `git reset` command.

The `git reset` command has three main modes: soft, mixed, and hard.

### Soft Reset

A soft reset does not change the working directory or the index. It only updates the `HEAD` pointer to point to a different commit. This means you can still see the changes you have made, and the changes from the commits you undid remain staged for commit.

Say you wanted to perform a soft reset in your repository. You would use the following command:

~~~{.bash caption=">_"}
git reset --soft 8c019e7
~~~

Let's see what happens with git status:

![git reset soft]({{site.images}}{{page.slug}}/pz86Pau.png)

You'll notice in the git status output that the file is staged and ready to be committed.

### Mixed Reset

A mixed reset is similar to a soft reset, but it also unstages the changes you had staged for commit. The changes themselves are not lost: they remain in your working directory as unstaged modifications.

To perform a mixed reset, you would use the following command:

~~~{.bash caption=">_"}
git reset --mixed 8c019e7
~~~

You'll notice it says `unstaged changes`.

![git reset mixed]({{site.images}}{{page.slug}}/8BijHrc.png)

And if we check the status, we can see that we have a modified file, which we can add and commit if we want to.

### Hard Reset

A hard reset is the most destructive mode of `git reset`. It removes the changes you have made in both the working directory and the index. This means that you will lose all of your changes, and you will be reverted to the state of the commit that you specified.

To perform a hard reset, you would use the following command:

~~~{.bash caption=">_"}
git reset --hard 8c019e7
~~~

![git reset hard]({{site.images}}{{page.slug}}/Vld0GSX.png)

You should notice that the changes disappear from the file itself and that `HEAD` now points at that commit. If we run git status, we'll see that there's nothing to commit.

![git status]({{site.images}}{{page.slug}}/oE08Cxg.png)
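
Because a hard reset throws work away, it's worth knowing the safety net: commits discarded this way usually remain reachable for a while through the reflog, Git's log of where `HEAD` has been. A recovery sketch, assuming the reset was your most recent action (so the previous position is `HEAD@{1}`):

~~~{.bash caption=">_"}
# Show where HEAD has pointed recently
git reflog

# Jump back to where HEAD was before the reset
git reset --hard 'HEAD@{1}'
~~~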

## Git Archive

The [git archive](https://git-scm.com/docs/git-archive) command in Git allows you to create a compressed archive (e.g., ZIP or TAR) of a specific commit, a branch, or the entire repository. It extracts the repository's contents at a particular state without including Git-specific metadata. This command is useful when exporting a clean snapshot of your project for distribution or deployment. Here's how to use `git archive`:

~~~{.bash caption=">_"}
git archive --format=[format] --output=[file] [reference]
~~~

Breaking down the options and parameters:

- `--format=[format]` specifies the format of the archive. It can be one of the following: zip, tar, tar.gz, or tar.bz2. Choose the appropriate format based on your requirements.

- `--output=[file]` specifies the name and location of the output file. Provide the desired filename and the appropriate extension for the chosen archive format.

- `[reference]` represents the commit hash, branch name, or any other valid reference that identifies the desired state of the repository.

## Git Help

The [git help](https://git-scm.com/docs/git-help) command in Git is used to access the built-in documentation and get help on various Git commands and topics. It provides information about Git commands, concepts, configuration options, and more. Let's check out several ways in which we can use the git help command.

### For General Git Help

To get a list of common Git commands and a brief description of each, you can run:

~~~{.bash caption=">_"}
git help
~~~

This will display the main help page, which provides an overview of available commands and links to more specific documentation.

### Help for a Specific Command

Shown in the image below is the result of the git help command, which lists common git commands for various situations, as well as subcommands:

![git help command]({{site.images}}{{page.slug}}/ca6733J.png)

If you want detailed information about a specific Git command, you can use `git help` followed by the command name. For example:

~~~{.bash caption=">_"}
git help commit
~~~

This will display the documentation for the commit command, including its usage, options, and examples.

### Search for Help Topics

You can list the available help topics and concept guides using the `-g` flag. For instance:

~~~{.bash caption=">_"}
git help -g
~~~

This will show a list of guides, such as those covering branching and revision syntax, which you can then open with `git help` followed by the guide's name.

### Opening the Git Manual in a Web Browser

If you prefer to access the Git documentation in a web browser, you can use:

~~~{.bash caption=">_"}
git help --web
~~~

This will open the Git manual in your default web browser, allowing you to navigate and search the documentation more conveniently.

## Conclusion

Exploring and mastering advanced Git commands can greatly benefit developers by streamlining their version control workflows.

By delving into these powerful commands, developers gain the ability to navigate complex branching strategies, handle large-scale codebases more efficiently, and collaborate effectively with team members. Also, with these tools, developers can confidently tackle intricate Git scenarios, ensuring smoother code management and facilitating a more productive development process.

{% include_html cta/bottom-cta.html %}
diff --git a/blog/_posts/2023-09-28-deployment-strategies-kubernetes.md b/blog/_posts/2023-09-28-deployment-strategies-kubernetes.md
new file mode 100644
index 000000000..ef5eecc9a
--- /dev/null
+++ b/blog/_posts/2023-09-28-deployment-strategies-kubernetes.md
@@ -0,0 +1,535 @@
---
title: "Deployment Strategies in Kubernetes"
categories:
  - Tutorials
toc: true
author: Muhammad Badawy

internal-links:
 - deployment strategies
 - strategies for deployment in kubernetes
 - deployment in kubernetes
 - how to do deployment in kubernetes
---

Kubernetes is a container orchestration platform that helps you deploy, manage, and scale containerized applications. One of the key features of Kubernetes is the ability to choose between different deployment strategies. With the right strategy, you can easily roll out new versions of your application based on business needs and application requirements.

Each strategy has its own advantages and disadvantages. So how do you choose the right one? In this article, we will discuss the different deployment strategies available in Kubernetes and the pros and cons of each. We will also provide examples of how to implement each strategy.

## Deployment Strategies in Kubernetes

![Strategies]({{site.images}}{{page.slug}}/strategy.png)\

In Kubernetes, a deployment strategy is an approach to managing the rollout and updates of applications in a cluster. It defines how changes to the application are applied, ensuring a smooth transition with minimal disruption to the application's availability.

Kubernetes provides various deployment strategies, each designed to meet different requirements and scenarios.

### Prerequisites

* Basic understanding of Kubernetes and [its Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
* Basic understanding of [Kubernetes services](https://kubernetes.io/docs/concepts/services-networking/service/).
* A Kubernetes environment up and running.
* [Kubectl](https://kubernetes.io/docs/tasks/tools/) installation.
+ +## Rolling Deployment in Kubernetes + +A rolling deployment is the default deployment strategy in Kubernetes. It updates your application gradually, one pod at a time. This means that there is no downtime during the deployment, as the old pods are still running while the new pods are being created. + +This type of Kubernetes deployment comes `out of the box`. Kubernetes provides a feature called `Deployment` to manage rolling updates. Here's how it's implemented: + +Step 1: Create a Deployment +To begin, you define a Kubernetes Deployment manifest (usually in a YAML file) that describes your application and its desired state, including the container image, replicas, and other configuration options. For example: + +~~~{.yml caption="deployment.yaml"} +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-app-deployment +spec: + replicas: 3 + selector: + matchLabels: + app: my-app + template: + metadata: + labels: + app: my-app + spec: + containers: + - name: my-app-container + image: your-registry/your-app-image:latest + ports: + - containerPort: 80 +~~~ + +Here you need to update the container registry and image with proper values. + +Step 2: Apply the Deployment +Use the `kubectl apply` command to create or update the Deployment: + +~~~{.bash caption=">_"} +kubectl apply -f deployment.yaml +~~~ + +Step 3: Monitor the Deployment +You can monitor the progress of the rolling update using the `kubectl rollout status` command: + +~~~{.bash caption=">_"} +kubectl rollout status deployment my-app-deployment +~~~ + +Step 4: Perform the Rolling Update +To perform the rolling update, you can update the image version in the Deployment manifest to the new version. For example, change the image tag from `latest` to a specific version: + +~~~{.yml caption="deployment.yaml"} +apiVersion: apps/v1 +kind: Deployment +metadata: + name: my-app-deployment +spec: + replicas: 3 + strategy: + type: RollingUpdate + rollingUpdate: + maxSurge: 1 + # Maximum number of pods that can be created beyond the desired count + maxUnavailable: 1 + # Maximum number of pods that can be unavailable at a time + selector: + matchLabels: + app: my-app + template: + metadata: + labels: + app: my-app + spec: + containers: + - name: my-app-container + image: your-registry/your-app-image:v2.0.0 + ports: + - containerPort: 80 +~~~ + +Step 5: Apply the Update +Apply the updated Deployment manifest to trigger the rolling update: + +~~~{.bash caption=">_"} +kubectl apply -f deployment.yaml +~~~ + +Step 6: Monitor the Rolling Update +Monitor the rolling update's progress using the same `kubectl rollout status` command as before: + +~~~{.bash caption=">_"} +kubectl rollout status deployment my-app-deployment +~~~ + +Kubernetes will now gradually update the pods in the Deployment by terminating the old instances and creating new ones with the updated image. The rolling update will be controlled to maintain the specified number of replicas during the process, ensuring the application remains available. + +### Advantages and Disadvantages of Rolling Deployment + +Advantages of rolling deployments: + +* No downtime +* Easy to implement +* Can be used with any type of application + +Disadvantages of rolling deployments: + +* Can be slow, especially if you have a large number of pods +* Can be difficult to troubleshoot if there are problems with the new version of the application + +## Blue-Green Deployment in Kubernetes + +A blue-green deployment is a more advanced deployment strategy that can be used to minimize downtime. 
In a blue-green deployment, you have two identical deployments of your application: one in production (the "blue" deployment) and one in staging (the "green" deployment).

When you are ready to deploy a new version of your application, you first deploy it to the green deployment. Once the green deployment is up and running, you then switch traffic from the blue deployment to the green deployment.

Kubernetes makes it relatively straightforward to implement blue-green deployment using its native features like `Services` and `Deployments`. Here's how you can do it:

### Step 1: Create Blue and Green Deployments

Create two separate Deployment manifests: one for the current live version (blue) and another for the new version (green). Both deployments should have the same labels, so they can be accessed through the same Service. For example:

~~~{.yml caption="blue-deployment.yaml"}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: blue
  template:
    metadata:
      labels:
        app: my-app
        version: blue
    spec:
      containers:
      - name: my-app-container
        image: your-registry/your-app-image:1.0.0
        ports:
        - containerPort: 80
~~~

~~~{.yml caption="green-deployment.yaml"}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: green
  template:
    metadata:
      labels:
        app: my-app
        version: green
    spec:
      containers:
      - name: my-app-container
        image: your-registry/your-app-image:2.0.0
        ports:
        - containerPort: 80
~~~

Notice here that the green deployment has a different `version` label and a different image tag.

### Step 2: Create a Service

Next, you need to create a Service that will serve as the entry point for accessing your application. This Service will route traffic to the current live version (blue). The selector in the Service should match the labels of the blue Deployment. For example:

~~~{.yml caption="service.yaml"}
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
    version: blue
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
~~~

### Step 3: Deploy Blue Version

Apply the blue Deployment and the Service to deploy the current live version:

~~~{.bash caption=">_"}
kubectl apply -f blue-deployment.yaml
kubectl apply -f service.yaml
~~~

### Step 4: Test Blue Version

Verify that the blue version is working correctly and serving traffic as expected.

~~~{.bash caption=">_"}
kubectl get deployment
kubectl get service
~~~

You can take further steps to verify that traffic flows from the Service to the Deployment and on to the running pods.

### Step 5: Deploy Green Version

Apply the green Deployment to deploy the new version:

~~~{.bash caption=">_"}
kubectl apply -f green-deployment.yaml
~~~

### Step 6: Switch Traffic to Green Version

Update the Service's selector to match the labels of the green Deployment:

~~~{.bash caption=">_"}
kubectl patch service my-app-service -p \
'{"spec":{"selector":{"version":"green"}}}'
~~~

Now, the Service will route traffic to the green deployment, making the new (green) version live while the blue environment remains available.

### Step 7: Test Green Version

Verify that the green version is working correctly and serving traffic as expected.

At this point, you have completed the blue-green deployment.
If any issues arise with the green version, you can quickly switch back to the blue version by updating the Service's selector to match the labels of the blue Deployment again.

Note: It's essential to monitor the deployment and perform appropriate testing before switching traffic from blue to green and vice versa.

## Advantages and Disadvantages of Blue-Green Deployment

Advantages of blue-green deployments:

* Very little downtime
* Easy to troubleshoot
* Can be used with any type of application

Disadvantages of blue-green deployments:

* Requires more resources than a rolling deployment
* Can be more complex to implement if you have a large number of dependent applications

## Recreate Deployment

A Recreate Deployment can lead to a temporary downtime during the update process, as all old instances of your application are completely replaced with the new version. Here's how you can implement the "Recreate" deployment strategy in Kubernetes:

### Step 1: Create a Deployment Manifest

Create a Deployment manifest YAML file that describes your application and its desired state. You'll start with the old version `v1` and later update it to the new version `new-version`. Here's a basic example:

~~~{.yml caption="deployment.yaml"}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 3 # Number of replicas for the new version
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: your-registry/your-app-image:v1
        ports:
        - containerPort: 80
~~~

### Step 2: Apply the Deployment

Apply the Deployment manifest using the `kubectl apply` command:

~~~{.bash caption=">_"}
kubectl apply -f deployment.yaml
~~~

### Step 3: Monitor the Rollout

Monitor the progress of the rollout using the `kubectl rollout status` command:

~~~{.bash caption=">_"}
kubectl rollout status deployment my-app-deployment
~~~

### Step 4: Update the Deployment with Recreate Strategy

To implement the "Recreate" strategy, you need to update the Deployment with the new version image and apply the changes. Kubernetes will automatically manage the recreation of pods.

Edit the Deployment manifest to update the image to the new version and specify the deployment strategy:

~~~{.yml caption="deployment.yaml"}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 3 # Number of replicas for the new version
  strategy:
    type: Recreate ## K8s deployment strategy
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: your-registry/your-app-image:new-version
        # Updated image version
        ports:
        - containerPort: 80
~~~

Apply the changes:

~~~{.bash caption=">_"}
kubectl apply -f deployment.yaml
~~~

### Step 5: Monitor the Rollout Again

Monitor the progress of the rollout as Kubernetes terminates the old pods and creates new pods with the updated configuration.

Keep in mind that the "Recreate" strategy can result in a brief downtime during the update process since all instances of the old version are stopped before the new version is fully deployed. Therefore, it's essential to plan updates during maintenance windows or use strategies like a rolling deployment if downtime is a concern for your application's availability.
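
Whichever update strategy you choose, Kubernetes keeps a revision history for Deployments, so a bad update can usually be reverted with a single command rather than by editing manifests. For example, to roll the Deployment above back to its previous revision:

~~~{.bash caption=">_"}
kubectl rollout undo deployment my-app-deployment
~~~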

## Advantages and Disadvantages of Recreate Deployment

Advantages of `recreate` deployments:

* Simple to implement
* Can be used with any type of application

Disadvantages of `recreate` deployments:

* Can cause downtime
* Can be difficult to troubleshoot if there are problems with the new version of the application

## Canary Deployment

A canary deployment is a deployment strategy that gradually introduces a new version of your application to your users. In a canary deployment, you start by deploying a small number of pods with the new version of your application. These pods are then monitored to see how they perform. If the new version of the application is performing well, you then gradually increase the number of pods with the new version.

Kubernetes provides native features like Services and Deployments to implement canary deployments. Here's how you can do it:

### Step 1: Create Stable Deployment

Create a Deployment manifest for the stable version of your application. This will be the stable Deployment. It should have the same number of replicas as your full desired number of instances. For example:

~~~{.yml caption="stable-deployment.yaml"}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-stable
spec:
  replicas: 8
  # Set the full desired number of replicas for the stable Deployment
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: your-registry/your-app-image:1.0.0
        # Current stable version image
        ports:
        - containerPort: 80
~~~

### Step 2: Create a Service

Create a Service that will be used as the entry point for accessing your application. This Service should be a "LoadBalancer" or a "NodePort" type, depending on your infrastructure setup. It will route traffic to the stable Deployment. For example:

~~~{.yml caption="service.yaml"}
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer # or NodePort
~~~

Apply the stable deployment and the service, then monitor the stable Deployment to ensure that it is functioning correctly and serving traffic as expected.

~~~{.bash caption=">_"}
kubectl apply -f stable-deployment.yaml
kubectl apply -f service.yaml
~~~

### Step 3: Create Canary Deployment

Create a Deployment manifest for the new version of your application. This will be the canary Deployment. You can set the number of replicas for this Deployment to a small percentage of your overall desired number of instances. For example:

~~~{.yml caption="canary-deployment.yaml"}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 2
  # Set a small number of replicas for the canary Deployment
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: your-registry/your-app-image:2.0.0
        # New version image
        ports:
        - containerPort: 80
~~~

### Step 4: Apply and Test Canary Deployment

Apply the canary Deployment and monitor it to ensure that it is functioning correctly and serving traffic as expected. Perform appropriate testing to validate the new version.

~~~{.bash caption=">_"}
kubectl apply -f canary-deployment.yaml
~~~

### Step 5: Gradually Increase Traffic to Canary

A plain Kubernetes Service has no traffic-weighting property: it distributes requests roughly evenly across all pods that match its selector. Since both Deployments carry the `app: my-app` label, the traffic split is governed by the replica ratio. With 8 stable replicas and 2 canary replicas, the canary already receives roughly 20% of the incoming traffic. To route more traffic to the canary, adjust the replica counts:

~~~{.bash caption=">_"}
kubectl scale deployment my-app-canary --replicas=4
kubectl scale deployment my-app-stable --replicas=6
~~~

In this example, the canary Deployment now receives roughly 40% of the incoming traffic. If you need precise traffic weights that are independent of replica counts, you'll need an ingress controller or service mesh (such as NGINX Ingress canary annotations or Istio), since that capability isn't part of the core Service API.

### Step 6: Monitor Canary Deployment

Keep monitoring the canary Deployment to ensure there are no issues as it receives more traffic.

### Step 7: Gradually Increase Traffic to Canary

If everything looks good, continue increasing the canary's share of traffic by scaling the canary Deployment up and the stable Deployment down.

### Step 8: Complete the Deployment

Once you are confident that the canary Deployment is stable and performs well, shift all traffic to it by scaling the stable Deployment down to zero:

~~~{.bash caption=">_"}
kubectl scale deployment my-app-canary --replicas=10
kubectl scale deployment my-app-stable --replicas=0
~~~

The canary Deployment will now receive 100% of the incoming traffic, and the stable Deployment can be safely removed.

By following these steps, you can implement a canary deployment strategy in Kubernetes to test and gradually roll out new versions of your application while minimizing the risk of introducing issues to all users.

## Advantages and Disadvantages of Canary Deployment

Advantages of canary deployments:

* Can be used to test new versions of your application with real users
* Can help you to identify problems with the new version of the application early on
* Can be used to gradually roll out a new version of your application to your users

Disadvantages of canary deployments:

* Can be more complex to implement than a rolling deployment
* Requires more resources than a rolling deployment

## Conclusion

The choice of deployment strategy depends on factors like the desired update speed, tolerance for downtime, risk tolerance, and the need for testing new versions before full rollout. Each strategy has its advantages and limitations, so it's essential to select the one that best suits your application and business requirements.

If you need to minimize downtime and deploy different versions at the same time, then a blue-green deployment or a canary deployment may be a good choice. If you need a simple and easy-to-implement deployment strategy, then a rolling deployment may be a better option.

{% include_html cta/bottom-cta.html %}
diff --git a/blog/_posts/2023-09-29-rust-api-rocket-diesel.md b/blog/_posts/2023-09-29-rust-api-rocket-diesel.md
new file mode 100644
index 000000000..553abc7a6
--- /dev/null
+++ b/blog/_posts/2023-09-29-rust-api-rocket-diesel.md
@@ -0,0 +1,471 @@
---
title: "Building APIs with Rust Rocket and Diesel"
categories:
  - Tutorials
toc: true
author: Ukeje Goodness
editor: Muhammad Badawy

internal-links:
 - building apis
 - building with rust rocket and diesel
 - building apis with the help of rust
 - rust rocket and diesel
 - apis with rust
---

Rust is a formidable contender in the backend development scene, drawing attention for its unparalleled emphasis on speed, memory safety, and concurrency. Rust's popularity has propelled it to the forefront of high-performance application development, making it an irresistible choice for those seeking performance and security in their codebase.

Harnessing the full potential of Rust's capabilities entails navigating its expansive ecosystem of libraries and tools, a common pain point new Rust developers face.

In this tutorial, you'll learn about Rust's API development process, focusing on a key player in the Rust web framework arena – Rocket. Rocket is recognized for its concise syntax that simplifies route definition and HTTP request handling. Furthermore, you'll explore Rust's compatibility with various databases, from PostgreSQL to MySQL and SQLite, facilitating seamless data persistence within your applications.

### Prerequisites

You'll need to meet a few prerequisites to understand and follow this hands-on tutorial:

1. You have experience working with Rust and have Rust installed on your machine.
2. Experience working with the Diesel package and SQL databases in Rust is a plus.

Head to [the Rust installations page](https://www.rust-lang.org/tools/install) to install Rust on your preferred operating system.

## Getting Rust Rocket and Diesel

Once you've set up your Rust workspace with [Cargo](https://www.makeuseof.com/cargo-and-crates-with-third-party-packages-in-rust/), add the Rocket and Diesel packages to the `[dependencies]` section of the `Cargo.toml` file that Cargo created during the project initialization:

~~~{.toml caption="Cargo.toml"}
[dependencies]
diesel = { version = "1.4.5", features = ["sqlite"] }
dotenv = "0.15.0"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
rocket_contrib = "0.4.11"
rocket_codegen = "0.4.11"
rocket = "0.4.11"
serde_derive = "1.0.163"
~~~

You've specified that you want to use version `0.4.11` of the [Rocket crate](https://rocket.rs) and version `1.4.5` of the [Diesel crate](https://diesel.rs) with its `sqlite` feature.

You'll use the `serde` and `serde_json` crates for JSON serialization and deserialization.

Here's the list of imports and directives you'll need to build your API:

~~~{.rs caption="main.rs"}
#![feature(proc_macro_hygiene, decl_macro)]

#[macro_use]
extern crate diesel;

use diesel::prelude::*;
use rocket::delete;
use rocket::get;
use rocket::post;
use rocket::put;
use rocket::routes;
use rocket_contrib::json::{Json, JsonValue};
use serde_json::json;

use serde_derive::{Deserialize, Serialize};

use crate::schema::student::dsl::student;

mod schema;
~~~

After importing the necessary types and functions, you can set up your database and build your API.
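
Note that the `#![feature(proc_macro_hygiene, decl_macro)]` attributes above mean this project needs a nightly Rust toolchain: Rocket 0.4 relies on unstable compiler features that aren't available on stable. If you're on stable Rust, you can switch the project over with rustup:

~~~{.bash caption=">_"}
rustup override set nightly
~~~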

## Setting Up the Database for Persistence with Diesel

![Database]({{site.images}}{{page.slug}}/database.png)\

Diesel provides a CLI tool that makes setting up persistence and interacting with the database easier.

Run this command in the terminal of your working directory to install the Diesel CLI tool:

~~~{.bash caption=">_"}
cargo install diesel_cli --features sqlite
~~~

After installing the tool, create an environment variables file and declare a `DATABASE_URL` variable for your database URL.

Here's a command you can run on your terminal to create the file and insert the database URL for an SQLite database.

~~~{.bash caption=">_"}
echo DATABASE_URL=database.db > .env
~~~

In this case, `database.db` is the database URL relative to your current working directory, since you're using a file-backed SQLite database.

Next, use the `diesel setup` command to set up your database. Diesel will connect to the database to ensure the URL is correct.

~~~{.bash caption=">_"}
diesel setup
~~~

Then, set up auto migration for easier persistence on the database with the `migration generate` command that takes the table name as an argument. Setting up automatic migrations helps with easier database entries.

~~~{.bash caption=">_"}
diesel migration generate create_students
~~~

On running the command, Diesel will create a directory with two files: `up.sql` and `down.sql`. Executing the `up.sql` file will create tables and entries, while executing the `down.sql` file will drop the database tables, depending on your specification.

Open the `up.sql` file and paste the SQL statement to create your app's table(s).

~~~{.sql caption="up.sql"}
-- Your SQL goes here

CREATE TABLE "student"
(
    "id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
    "first_name" TEXT NOT NULL,
    "last_name" TEXT NOT NULL,
    "age" INTEGER NOT NULL
);
~~~

Add the SQL statement that drops your created tables in the `down.sql` file.

~~~{.sql caption="down.sql"}
-- down.sql

-- This file should undo anything in `up.sql`
DROP TABLE "student"
~~~

After editing the `up.sql` and `down.sql` files, run the `migration run` command to run pending migrations for the database connection.

~~~{.bash caption=">_"}
diesel migration run
~~~

You'll find a `schema.rs` file in your project's `src` directory containing code for interacting with the database tables.

~~~{.rs caption="schema.rs"}
// @generated automatically by Diesel CLI.

diesel::table! {
    student (id) {
        id -> Integer,
        first_name -> Text,
        last_name -> Text,
        age -> Integer,
    }
}
~~~

Attach the `schema.rs` file to your `main.rs` file with the `mod schema` directive to use the contents of the `schema.rs` file in the `main.rs` file.

You must declare structs for data serialization, migrations, and deserialization operations. Create a `models.rs` file and add struct definitions to match your database schema.

Here are the structs for the CRUD operations:

~~~{.rs caption="models.rs"}
#[derive(Queryable, Serialize)]
pub struct Student {
    pub id: i32,
    pub first_name: String,
    pub last_name: String,
    pub age: i32,
}

#[derive(Queryable, Insertable, Serialize, Deserialize)]
#[table_name = "student"]
pub struct NewStudent<'a> {
    pub first_name: &'a str,
    pub last_name: &'a str,
    pub age: i32,
}

#[derive(Deserialize, AsChangeset)]
#[table_name = "student"]
pub struct UpdateStudent {
    first_name: Option<String>,
    last_name: Option<String>,
    age: Option<i32>,
}
~~~

The request handler functions will return the `Student` struct. You'll use the `NewStudent` struct for inserts and the `UpdateStudent` struct for update operations. The DELETE operation doesn't need a struct since you'll delete entries from the database with the `id`.

Here you've successfully set up the database, and you can start building your API that interacts with the database through Diesel.

Next, you'll write the program for CRUD operations on the database based on incoming requests to the server.

## The POST Request Handler Function

Your POST request handler function will retrieve JSON data from the client, parse the request, insert the data into the database, and return a JSON message to the client after a successful insertion process.

Here's the function signature of the POST request handler function:

~~~{.rs caption="main.rs"}
#[post("/student", format = "json", data = "<new_student>")]
pub fn create_student(new_student: Json<NewStudent>) -> Json<JsonValue> {

}
~~~

The `create_student` function takes in a `Json` object of the `NewStudent` type and returns a `Json` object of the `JsonValue` type.

The `#[post("/student", format = "json", data = "<new_student>")]` line is a Rocket attribute that specifies the HTTP method, URL path, and data format for the handler function.

Here's the full function that establishes a database connection and inserts the data into the database:

~~~{.rs caption="main.rs"}
#[post("/student", format = "json", data = "<new_student>")]
pub fn create_student(new_student: Json<NewStudent>) -> Json<JsonValue> {
    let connection = establish_connection();
    let new_student = NewStudent {
        first_name: new_student.first_name,
        last_name: new_student.last_name,
        age: new_student.age,
    };

    diesel::insert_into(crate::schema::student::dsl::student)
        .values(&new_student)
        .execute(&connection)
        .expect("Error saving new student");

    Json(JsonValue::from(json!({
        "status": "success",
        "message": "Student has been created",

    })))
}
~~~

The `connection` variable is the connection instance, and the `new_student` variable is an instance of the `NewStudent` struct containing data from the request.

The `create_student` function inserts the `new_student` struct instance into the database with the `values` method of diesel's `insert_into` function before returning the response to the client.

In your `main` function, you'll ignite a rocket instance with the `ignite` function and mount the routes on a base route with the `mount` function that takes in the base route and a list of routes.

Finally, you'll call the `launch` function on your rocket instance to start the server.

~~~{.rs caption="main.rs"}
fn main() {
    rocket::ignite().mount("/", routes![
        create_student,
    ]).launch();
}
~~~

On running your project with the `cargo run` command, Rocket should start a server on port `8000`, and you can proceed to make API calls to your POST request endpoint.

![Result from running the server]({{site.images}}{{page.slug}}/qLvOzrq.jpg)
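
One piece the handlers above rely on but the post never shows is the `establish_connection` helper each of them calls. A minimal sketch, assuming the SQLite setup and `.env` file from earlier (the helper's shape follows the usual Diesel 1.x convention rather than code from this post):

~~~{.rs caption="main.rs"}
use diesel::prelude::*;
use diesel::sqlite::SqliteConnection;
use dotenv::dotenv;
use std::env;

pub fn establish_connection() -> SqliteConnection {
    // Load DATABASE_URL from the .env file created earlier
    dotenv().ok();
    let database_url = env::var("DATABASE_URL")
        .expect("DATABASE_URL must be set");
    SqliteConnection::establish(&database_url)
        .unwrap_or_else(|_| panic!("Error connecting to {}", database_url))
}
~~~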

Here's a CURL request that sends a POST request with a JSON payload to the `student` endpoint:

~~~{.bash caption=">_"}
curl -X POST http://localhost:8000/student -H \
'Content-Type: application/json' -d \
'{"first_name": "John", "last_name": "Doe", "age": 17}'
~~~

Here's the result of running the CURL request:

![Result from sending the POST request]({{site.images}}{{page.slug}}/RrobV0A.jpg)

## The GET Request Handler Function

Your GET request handler function will return all the entries in the database as JSON to the client.

Here's the function signature of the GET request handler function:

~~~{.rs caption="main.rs"}
#[get("/students")]
pub fn get_students() -> Json<JsonValue> {

}
~~~

The `get_students` function doesn't take in any values and returns a `Json` object of the `JsonValue` type.

Here's the full function that establishes a database connection and retrieves the data from the database:

~~~{.rs caption="main.rs"}
#[get("/students")]
pub fn get_students() -> Json<JsonValue> {
    let connection = establish_connection();

    let students = student.load::<Student>(&connection)
        .expect("Error loading students");

    Json(JsonValue::from(json!({
        "students": students,
    })))
}
~~~

The `get_students` function retrieves all the `Student` entries from the database with the `load` function and returns the values with the `json!` macro.

Add the `get_students` function to your `routes!` to register the handler function on the rocket instance and run your application.

~~~{.rs caption="main.rs"}
fn main() {
    rocket::ignite().mount("/", routes![
        get_students,
        create_student,
    ]).launch();
}
~~~

On running your app, you should be able to hit the `/students` endpoint with a GET request that retrieves all the entries in the database.

Here's the CURL request that hits the `/students` endpoint and retrieves entries in the database:

~~~{.bash caption=">_"}
curl http://localhost:8000/students
~~~

Here's the result from running the CURL GET request:

![Result from sending the GET request]({{site.images}}{{page.slug}}/FXr2D8W.jpg)

## The PUT Request Handler Function

Your PUT request handler function will update an entry in the database after searching for the entity with the matching `id` field.

Here's the function signature of the PUT request handler function:

~~~{.rs caption="main.rs"}
#[put("/students/<id>", data = "<update_data>")]
pub fn update_student(id: i32, update_data: Json<UpdateStudent>)
    -> Json<JsonValue> {

}
~~~

The `update_student` function takes in the `id` and a `Json` object of the `UpdateStudent` type and returns a `Json` object of the `JsonValue` type.

Here's the full function that establishes a database connection and updates values in the database:

~~~{.rs caption="main.rs"}
#[put("/students/<id>", data = "<update_data>")]
pub fn update_student(id: i32, update_data: Json<UpdateStudent>)
    -> Json<JsonValue> {
    let connection = establish_connection();

    // Use the `update` method of the Diesel ORM to update
    // the student's record
    let _updated_student = diesel::update(student.find(id))
        .set(&update_data.into_inner())
        .execute(&connection)
        .expect("Failed to update student");

    // Return a JSON response indicating success
    Json(JsonValue::from(json!({
        "status": "success",
        "message": format!("Student {} has been updated", id),
    })))
}
~~~

After establishing the connection with the `establish_connection` function, the `update_student` function updates the entity in the database with the value from the `update_data` parameter after searching for a matching `id` with the `find` function.
The `update_student` function returns a message containing the ID of the updated entity after a successful operation.

Add the `update_student` function to your `routes!` to register the handler function on the rocket instance and run your application.

~~~{.rs caption="main.rs"}
fn main() {
    rocket::ignite().mount("/", routes![
        get_students,
        create_student,
        update_student,
    ]).launch();
}
~~~

On running your app, you should be able to hit the `/students/<id>` endpoint with a PUT request that updates the entity that has the specified `id` value.

Here's a CURL request that sends a `PUT` request to the server:

~~~{.bash caption=">_"}
curl -X PUT http://localhost:8000/students/1 -H \
'Content-Type: application/json' -d \
'{"first_name": "Jane", "last_name": "Doe", "age": 18}'
~~~

Here's the result of the update operation attempt for the row with the `id` equal to 1.

![Result from sending the PUT request]({{site.images}}{{page.slug}}/ZSVGidf.jpg)

## The DELETE Request Handler Function

Your DELETE request handler function will delete an entry from the database after searching for the entity with the matching `id` field.

Here's the function signature of the DELETE request handler function:

~~~{.rs caption="main.rs"}
#[delete("/students/<id>")]
pub fn delete_student(id: i32) -> Json<JsonValue> {

}
~~~

The `delete_student` function takes in the `id` of the entity you want to delete and returns a `Json` object of the `JsonValue` type.

Here's the full function that establishes a database connection and deletes values from the database:

~~~{.rs caption="main.rs"}
#[delete("/students/<id>")]
pub fn delete_student(id: i32) -> Json<JsonValue> {
    let connection = establish_connection();

    diesel::delete(student.find(id)).execute(&connection)
        .expect(&format!("Unable to find student {}", id));

    Json(JsonValue::from(json!({
        "status": "success",
        "message": format!("Student with ID {} has been deleted", id),
    })))
}
~~~

The `delete_student` function deletes the entity from the database with the `delete` function after searching for the entity with the `find` function.

The `delete_student` function returns a message containing the ID of the deleted entity after a successful operation.

Add the `delete_student` function to your `routes!` to register the handler function on the rocket instance and run your application.

~~~{.rs caption="main.rs"}
fn main() {
    rocket::ignite().mount("/", routes![
        get_students,
        delete_student,
        create_student,
        update_student,
    ]).launch();
}
~~~

On running your app, you should be able to hit the `/students/<id>` endpoint with a DELETE request that deletes the entity that has the specified `id` value.

Here's a CURL request that sends a `DELETE` request to the `/students/<id>` endpoint on the server:

~~~{.bash caption=">_"}
curl -X DELETE http://localhost:8000/students/1
~~~

Here's the result of the delete operation attempt for the row with the `id` equal to 1:

![Result from sending the delete request]({{site.images}}{{page.slug}}/VYsP2gG.jpg)
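
To confirm the row is gone, you can hit the GET endpoint from earlier again; the deleted student should no longer appear in the response:

~~~{.bash caption=">_"}
curl http://localhost:8000/students
~~~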
+ +## Conclusion + +You've learned how to build a CRUD REST API with Rust's Rocket and Diesel libraries. + +You can check out [Rocket](http://rocket.rs) and [Diesel's](http://diesel.rs) documentation to learn more about these libraries for more advanced operations like using WebSockets and defining custom middleware. + +{% include_html cta/bottom-cta.html %} diff --git a/blog/_posts/2023-10-02-using-github-actions-locally.md b/blog/_posts/2023-10-02-using-github-actions-locally.md new file mode 100644 index 000000000..6fe3504a3 --- /dev/null +++ b/blog/_posts/2023-10-02-using-github-actions-locally.md @@ -0,0 +1,353 @@ +--- +title: "How to Test and Run GitHub Actions Locally" +categories: + - Tutorials +toc: true +author: Kumar Harsh + +internal-links: + - test and run gitHub actions + - how to run github actions locally + - using github actions locally + - testing github actions locally + - how does github actions run locally +--- + +[GitHub Actions](https://docs.github.com/en/actions) is GitHub's approach to automating development workflows, enabling you to create, build, test, and deploy software. Additionally, with GitHub Actions, you can build automation around GitHub's offerings, such as triaging GitHub issues and creating GitHub releases. + +However, developing a GitHub Actions workflow can be time-consuming. The process involves committing and pushing your changes to your workflows to the remote repository repeatedly to test them. This not only increases the time spent in perfecting your workflows but also adds unnecessary commits and logs to your repo's version history. + +Fortunately, several workarounds exist to facilitate local execution and testing of GitHub Actions. For instance, you could use a parallel identical repo to test your workflows before adding them to the main repository, or you could use the official [GitHub Actions Runner](https://github.com/actions/runner) in a self-hosted environment. However, a more seamless and widely used solution is a tool called [`act`](https://github.com/nektos/act) that uses [Docker](https://www.docker.com/) containers to run and test your actions locally. In this article, you'll learn all about `act` and how to use it to quickly build and test GitHub Actions workflows. + +## How to Run GitHub Actions Locally + +![how]({{site.images}}{{page.slug}}/how.png)\ + +Before installing `act`, you need to have Docker ([Docker Desktop](https://www.docker.com/products/docker-desktop/) for Mac and Windows, and [Docker Engine](https://docs.docker.com/engine/) for Linux) set up on your system. + +You'll also need to [clone this repository](https://github.com/krharsh17/hello-react.git) with the following command: + +~~~{.bash caption=">_"} +git clone https://github.com/krharsh17/hello-react.git +~~~ + +This repository contains a sample React app that was created using [Vite](https://vitejs.dev/) and defines three GitHub Actions workflows. You'll use them later when exploring the `act` CLI. + +### Install `act` + +Once you've cloned the repository, it's time to install `act` on your system. The specific instructions for various operating systems are available in the [official GitHub documentation](https://github.com/nektos/act#installation). 
+ +If you're on a Mac, you can use [Homebrew](https://brew.sh/) to install it by running the following command in your terminal: + +~~~{.bash caption=">_"} +brew install act +~~~ + +To ensure `act` was installed correctly, run the following command: + +~~~{.bash caption=">_"} +act --version +~~~ + +This should print the version of the installed `act` tool: + +~~~{.bash caption=">_"} +act version 0.2.49 +~~~ + +This indicates that the tool was installed correctly, and you can proceed to testing the workflows. + +> Make sure that Docker is running on the system when using the `act` tool. + +### Explore `act` + +`act` offers a user-friendly interface for running workflows. You can begin by running the following default command to run all workflows that are triggered by a [GitHub push event](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#push): + +~~~{.bash caption=">_"} +act +~~~ + +If this is the first time you're running the tool, it asks you to choose the default Docker image you'd like to use: + +~~~{.bash caption=">_"} +% act +? Please choose the default image you want to use with act: + + - Large size image: +20GB Docker image, includes almost all tools used on GitHub Actions (IMPORTANT: currently only ubuntu-18.04 platform is available) + - Medium size image: ~500MB, includes only necessary tools to bootstrap actions and aims to be compatible with all actions + - Micro size image: <200MB, contains only NodeJS required to bootstrap actions, doesn't work with all actions + +Default image and other options can be changed manually in ~/.actrc (please refer to https://github.com/nektos/act#configuration for additional information about file structure) [Use arrows to move, type to filter, ? for more help] + Large +> Medium + Micro +~~~ + +If you want to build complex workflows that make use of multiple actions and other features from GitHub Actions, you should choose the `Large size image`. However, using this image takes up a large amount of your system's resources. In most cases, the medium-sized image is the optimal choice. You can always switch between the image types by updating your `.actrc` file (more on this later). + +After you select the image type, you'll notice that all three workflows are triggered (take note of the prefix of each line of the logs): + +~~~{.bash caption=">_"} + +[Create Release/release ] 🚀 Start image=catthehacker/ubuntu:act-latest +[Create Production Build/build] 🚀 Start image=catthehacker/ubuntu:act-latest +[Run tests/test ] 🚀 Start image=catthehacker/ubuntu:act-latest +[Create Release/release ] 🐳 docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=true +[Run tests/test ] 🐳 docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=true +[Create Production Build/build] 🐳 docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=true +... +~~~ + +All three workflows are triggered because they define the `push` event as their trigger. Some workflows may complete running successfully, while some may fail (due to a lack of some extra configuration that you may need to add to run them locally). In the next section, you'll learn how to use the `act` tool to test various types of workflows. + +### Useful Options to Help You Test Various Types of Workflows + +In this section, you'll learn of some of the useful options that the `act` CLI offers to help you test various types of workflows and jobs easily. 
+
#### List All Jobs

One of the basic options provided by `act` is `-l`. The `-l` flag enables you to list all jobs in your repository.

Run the following command in the sample repository to view a list of all the jobs in it:

~~~{.bash caption=">_"}
% act -l
Stage  Job ID   Job name  Workflow name            Workflow file       Events
0      build    build     Create Production Build  build-for-prod.yml  push
0      release  release   Create Release           create-release.yml  push
0      test     test      Run tests                run-tests.yml       push
~~~

This output lists each job's ID and name, the name and file of the workflow it belongs to, and the events that can trigger it. In repos that have a large number of workflows, this command is helpful for quickly finding a particular job.

#### Run Workflows Triggered by Specific Events

`act` also enables you to run only the workflows associated with a particular trigger event. As you learned previously, simply running `act` runs all workflows that are set to be triggered by the `push` event. To run workflows associated with any other event, you can run `act <event_name>`. For example, to run all workflows set to be triggered on a pull request, you can run the following command:

~~~{.bash caption=">_"}
act pull_request
~~~

You'll notice that the tool doesn't print anything because the sample repo doesn't have any eligible workflows.

#### Run Specific Jobs

Apart from running workflows on the basis of their trigger event, you can also run a specific job directly using the `-j` flag followed by the name of the job. For instance, to run the `test` job, you can use the following command:

~~~{.bash caption=">_"}
act -j test
~~~

This runs the `test` job and prints its output on the terminal. Your output should look like this:

~~~{.bash caption=">_"}

[Run tests/test] 🚀 Start image=catthehacker/ubuntu:act-latest
[Run tests/test] 🐳 docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=true
[Run tests/test] using DockerAuthConfig authentication for docker pull
[Run tests/test] 🐳 docker create image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Run tests/test] 🐳 docker run image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Run tests/test] ⭐ Run Main Checkout
[Run tests/test] 🐳 docker cp src=/Users/kumarharsh/Work/Draft/hello-react/.
dst=/Users/kumarharsh/Work/Draft/hello-react
[Run tests/test] ✅ Success - Main Checkout
[Run tests/test] ⭐ Run Main Set up dev dependencies
[Run tests/test] 🐳 docker exec cmd=[bash --noprofile --norc -e -o pipefail /var/run/act/workflow/1] user= workdir=
|
| added 246 packages, and audited 247 packages in 6s
|
| 52 packages are looking for funding
| run `npm fund` for details
|
| found 0 vulnerabilities
[Run tests/test] ✅ Success - Main Set up dev dependencies
[Run tests/test] ⭐ Run Main Run tests
[Run tests/test] 🐳 docker exec cmd=[bash --noprofile --norc -e -o pipefail /var/run/act/workflow/2] user= workdir=
|
| > hello-react@0.0.0 test
| > vitest
|
|
| RUN v0.34.1 /Users/kumarharsh/Work/Draft/hello-react
|
| ✓ src/App.test.jsx (2 tests) 1ms
|
| Test Files 1 passed (1)
| Tests 2 passed (2)
| Start at 02:56:29
| Duration 167ms (transform 18ms, setup 0ms, collect 8ms, tests 1ms, environment 0ms, prepare 45ms)
|
[Run tests/test] ✅ Success - Main Run tests
[Run tests/test] 🏁 Job succeeded
~~~

#### Do a Dry Run

`act` also allows you to do a dry run of your workflows, which checks the workflow configuration for correctness without executing it. A dry run doesn't verify whether the jobs and steps in the workflow will actually work at runtime, so you can't rely on it to know whether your workflow will perform as expected when deployed. It is, however, a quick way to find and fix syntax mistakes. To see this in action, run the following command:

~~~{.bash caption=">_"}
act -j release -n
~~~

Here's what your output looks like:

~~~{.bash caption=">_"}

*DRYRUN* [Create Release/release] 🚀 Start image=catthehacker/ubuntu:act-latest
*DRYRUN* [Create Release/release] 🐳 docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=true
*DRYRUN* [Create Release/release] 🐳 docker create image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
*DRYRUN* [Create Release/release] 🐳 docker run image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
*DRYRUN* [Create Release/release] ☁ git clone 'https://github.com/actions/create-release' # ref=v1
*DRYRUN* [Create Release/release] ⭐ Run Main Checkout code
*DRYRUN* [Create Release/release] ✅ Success - Main Checkout code
*DRYRUN* [Create Release/release] ⭐ Run Main Create Release
*DRYRUN* [Create Release/release] ✅ Success - Main Create Release
*DRYRUN* [Create Release/release] 🏁 Job succeeded
~~~

This shows that the workflow is syntactically correct. However, if you try running this workflow using the `act -j release` command, you'll face the following error:

~~~{.bash caption=">_"}
 % act -j release
[Create Release/release] 🚀 Start image=catthehacker/ubuntu:act-latest
[Create Release/release] 🐳 docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=true
[Create Release/release] using DockerAuthConfig authentication for docker pull
[Create Release/release] 🐳 docker create image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Create Release/release] 🐳 docker run image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Create Release/release] ☁ git clone 'https://github.com/actions/create-release' # ref=v1
[Create Release/release] ⭐ Run Main Checkout code
[Create Release/release] 🐳 docker cp src=/Users/kumarharsh/Work/Draft/hello-react/.
dst=/Users/kumarharsh/Work/Draft/hello-react
[Create Release/release] ✅ Success - Main Checkout code
[Create Release/release] ⭐ Run Main Create Release
[Create Release/release] 🐳 docker cp src=/Users/kumarharsh/.cache/act/actions-create-release@v1/ dst=/var/run/act/actions/actions-create-release@v1/
[Create Release/release] 🐳 docker exec cmd=[node /var/run/act/actions/actions-create-release@v1/dist/index.js] user= workdir=
[Create Release/release] ❗ ##[error]Parameter token or opts.auth is required
[Create Release/release] ❌ Failure - Main Create Release
[Create Release/release] exitcode '1': failure
[Create Release/release] 🏁 Job failed
Error: Job 'release' failed
~~~

This failure occurred because a required authentication token (**Parameter token or opts.auth is required**) was not provided. On GitHub's infrastructure, this value is supplied to workflows automatically by the GitHub Actions Runner. However, you need to pass it in manually when using `act`, which you'll learn how to do in the next section.

#### Pass Personal Access Tokens

Some actions in your workflows, such as those that interact with the GitHub API or other GitHub services, may require a GitHub personal access token (PAT). While the GitHub Actions runtime provides your workflows with a token from your account through the `${{ secrets.GITHUB_TOKEN }}` variable, you need to pass this value to the `act` tool manually when it's needed.

To do so, you can pass it in using the `-s` option with the variable name `GITHUB_TOKEN`. You can either directly input your token on the command line or use GitHub's `gh` CLI to retrieve and supply the token on the fly with the following command:

~~~{.bash caption=">_"}
act -j release -s GITHUB_TOKEN="$(gh auth token)"
~~~

#### Pass Secrets

In the same way you used the `-s` flag to pass in the GitHub token, you can use it to pass other secrets as well. Try running the following command to invoke the `release` job and pass in the release description using secrets:

~~~{.bash caption=">_"}
act -j release -s GITHUB_TOKEN="$(gh auth token)" -s \
RELEASE_DESCRIPTION="Yet another release"
~~~

> Running this command may not work for you since your GitHub token doesn't have permission to create releases in the repo you've cloned. To fix that, fork the repo and then clone your fork. After that, this command runs successfully.

### Collect Artifacts

Some workflows generate or consume artifacts, such as build outputs or executable binaries. GitHub provides a means to upload these artifacts through the [`actions/upload-artifact@v3`](https://github.com/actions/upload-artifact) action to a temporary path in the GitHub Actions runtime where your workflow is running.

However, when it comes to executing and testing workflows locally, there isn't a GitHub Actions runtime available to receive these uploads.
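For context, an upload step generally looks something like this (an illustrative sketch; the exact step in the sample repo's `build-for-prod.yml` may differ):

~~~{.yaml caption="build-for-prod.yml"}
# Sketch of an artifact upload step: it uploads the production build
# output (Vite writes it to dist/) under the default artifact name
- name: Archive production artifacts
  uses: actions/upload-artifact@v3
  with:
    path: dist
~~~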
If you try to run the `build` job in the sample repo, it will fail:

~~~{.bash caption=">_"}
% act -j build
[Create Production Build/build] 🚀 Start image=catthehacker/ubuntu:act-latest
[Create Production Build/build] 🐳 docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=true
[Create Production Build/build] using DockerAuthConfig authentication for docker pull
[Create Production Build/build] 🐳 docker create image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Create Production Build/build] 🐳 docker run image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Create Production Build/build] ☁ git clone 'https://github.com/actions/upload-artifact' # ref=v3
[Create Production Build/build] ⭐ Run Main Checkout repository
[Create Production Build/build] 🐳 docker cp src=/Users/kumarharsh/Work/Draft/hello-react/. dst=/Users/kumarharsh/Work/Draft/hello-react
[Create Production Build/build] ✅ Success - Main Checkout repository
[Create Production Build/build] ⭐ Run Main npm install & build
...[truncated]
| Starting artifact upload
| For more detailed logs during the artifact upload process, enable step-debugging: https://docs.github.com/actions/monitoring-and-troubleshooting-workflows/enabling-debug-logging#enabling-step-debug-logging
| Artifact name is valid!
[Create Production Build/build] ❗ ::error::Unable to get ACTIONS_RUNTIME_TOKEN env variable
[Create Production Build/build] ❌ Failure - Main Archive production artifacts
[Create Production Build/build] exitcode '1': failure
[Create Production Build/build] 🏁 Job failed
Error: Job 'build' failed
~~~

The error message says the `ACTIONS_RUNTIME_TOKEN` environment variable is missing. This token gives the workflow instance access to the GitHub Actions Runner runtime, where it can upload and download files. You can give your local runner environment this ability by passing the `--artifact-server-path` flag with a local directory, which makes `act` spin up a local artifact server. Here's what the output looks like when you pass in a path using this flag:

~~~{.bash caption=">_"}

% act -j build --artifact-server-path /tmp/artifacts
INFO[0000] Start server on http://192.168.1.105:34567
[Create Production Build/build] 🚀 Start image=catthehacker/ubuntu:act-latest
[Create Production Build/build] 🐳 docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=true
[Create Production Build/build] using DockerAuthConfig authentication for docker pull
[Create Production Build/build] 🐳 docker create image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Create Production Build/build] 🐳 docker run image=catthehacker/ubuntu:act-latest platform= entrypoint=["tail" "-f" "/dev/null"] cmd=[]
[Create Production Build/build] ☁ git clone 'https://github.com/actions/upload-artifact' # ref=v3
[Create Production Build/build] ⭐ Run Main Checkout repository
[Create Production Build/build] 🐳 docker cp src=/Users/kumarharsh/Work/Draft/hello-react/. dst=/Users/kumarharsh/Work/Draft/hello-react
[Create Production Build/build] ✅ Success - Main Checkout repository
[Create Production Build/build] ⭐ Run Main npm install & build
...[truncated]
[Create Production Build/build] 💬 ::debug::A gzip file created for /Users/kumarharsh/Work/Draft/hello-react/dist/vite.svg helped with reducing the size of the original file. The file will be uploaded using gzip.
| Total size of all the files uploaded is 50041 bytes
| File upload process has finished. Finalizing the artifact upload
[Create Production Build/build] 💬 ::debug::Artifact Url: http://192.168.1.105:34567/_apis/pipelines/workflows/1/artifacts?api-version=6.0-preview
[Create Production Build/build] 💬 ::debug::URL is http://192.168.1.105:34567/_apis/pipelines/workflows/1/artifacts?api-version=6.0-preview&artifactName=artifact
[Create Production Build/build] 💬 ::debug::Artifact artifact has been successfully uploaded, total size in bytes: 150909
| Artifact has been finalized. All files have been successfully uploaded!
|
| The raw size of all the files that were specified for upload is 150909 bytes
| The size of all the files that were uploaded is 50041 bytes. This takes into account any gzip compression used to reduce the upload size, time and storage
|
| Note: The size of downloaded zips can differ significantly from the reported size. For more information see: https://github.com/actions/upload-artifact#zipped-artifact-downloads
|
| Artifact artifact has been successfully uploaded!
[Create Production Build/build] ✅ Success - Main Archive production artifacts
[Create Production Build/build] 🏁 Job succeeded
~~~

The `act` runner is now able to upload the production app artifacts to a local artifact server backed by the directory you provided. This option can help you test and develop workflows that rely on upload and download actions to complete their process.

### The `.actrc` File

If you find yourself regularly passing the same options to the `act` CLI, you can use the `.actrc` file to define default options and values that are applied every time the `act` CLI is called. You might recall that during your initial `act` usage, you selected the default container image for local runner execution. The option that you chose was stored in the `.actrc` file and is passed into `act` with every call. This is what the `.actrc` file looked like after you chose the default image:

~~~{.bash caption=">_"}
-P ubuntu-latest=catthehacker/ubuntu:act-latest
~~~

You can use this file to set other default flags as well, such as the secret flag for the `GITHUB_TOKEN` variable:

~~~{.bash caption=">_"}
-P ubuntu-latest=catthehacker/ubuntu:act-latest
-s GITHUB_TOKEN="$(gh auth token)"
~~~

> Note that `.actrc` is read by `act` itself rather than by your shell, so a command substitution like `$(gh auth token)` may not be expanded from this file. If it isn't in your version of `act`, pass dynamic values like tokens on the command line or through a shell alias instead.

You can, of course, set more default options using this file. Feel free to explore the [docs](https://github.com/nektos/act#example-commands) for available options that you can set as defaults when running the `act` CLI.

This completes the tutorial on `act`. You can find all the code used here [in this GitHub repo](https://github.com/krharsh17/hello-react).

## Limitations of `act`

![Limitations]({{site.images}}{{page.slug}}/limit.png)\

While `act` is a great tool for setting up a local GitHub Actions workflow development environment, you might run into some issues when working with it. The following are some of the limitations you should be aware of before you get started with it in a project:

* **Limited environment replication:** `act` doesn't fully replicate the GitHub Actions environment by default. It simulates the workflow runs but doesn't provide exact replicas of the GitHub-hosted runner environments. This can lead to discrepancies when actions rely on specific runner configurations or dependencies. You can consider using the images from [`nektos/act-environments`](https://github.com/nektos/act-environments) if you need the closest match to GitHub's runners.
However, note that these images are quite large and might still produce unexpected results if your workflow runs into any [other known issues](https://github.com/nektos/act#known-issues).
* **External services and resources:** Actions that interact with external services or resources may not work as expected when run locally with `act`. For instance, services like databases or cloud resources might not be accessible, which impacts the behavior of related actions. In such cases, the output logs might not be descriptive enough to diagnose the problem.
* **Limited OS support:** `act` primarily supports Linux-based containers. Support for Windows- and macOS-based platforms is [under discussion](https://github.com/nektos/act/issues/97), but it's unclear when it will land.
* **Workflow dependency resolution:** Handling workflow dependencies can be challenging with `act`. If your workflow includes cross-repository dependencies or relies on the behavior of other workflows, `act` may not fully support these scenarios. In such a situation, it's best to set up a test GitHub repo with those workflows and test them there.
* **Custom actions and workflows:** `act` may not fully support custom actions or workflows that are not part of the official GitHub Actions ecosystem, so some actions may not behave as expected when run locally. If you notice this, it's best to move to a dedicated GitHub repo so you can access the complete GitHub Actions runner environment when testing.
* **Limited debugging features:** While `act` provides a way to run workflows locally, it doesn't offer the same debugging capabilities as running actions on GitHub, where you can easily access logs, artifacts, and other diagnostic information. You only get the logs printed to the terminal, and there's no built-in way to browse the intermediate or final artifacts of a workflow. Once again, for workflows that heavily rely on these features, it might be best to switch to a dedicated remote testing GitHub repository.

A different approach to testing GitHub Actions locally is to write your workflow logic as an [Earthfile](/) that you run inside GitHub Actions. Earthly's Earthfiles can always be run locally because every step is containerized.
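As a rough sketch of that idea (the target name, base image, and commands here are illustrative, not taken from a real project):

~~~{.dockerfile caption="Earthfile"}
# A minimal Earthfile target; `earthly +test` runs these steps in a
# container, the same way on your machine as inside a GitHub Actions job
VERSION 0.7

test:
    FROM node:18
    COPY . .
    RUN npm install
    RUN npm test
~~~

Because every target runs in a container, `earthly +test` behaves the same locally as it does when a thin GitHub Actions workflow invokes the same command.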
+ +{% include_html cta/bottom-cta.html %} diff --git a/blog/assets/images/advanced-git-commands-2/8BijHrc.png b/blog/assets/images/advanced-git-commands-2/8BijHrc.png new file mode 100644 index 000000000..728269319 Binary files /dev/null and b/blog/assets/images/advanced-git-commands-2/8BijHrc.png differ diff --git a/blog/assets/images/advanced-git-commands-2/8OLObm9.png b/blog/assets/images/advanced-git-commands-2/8OLObm9.png new file mode 100644 index 000000000..50aaac4ee Binary files /dev/null and b/blog/assets/images/advanced-git-commands-2/8OLObm9.png differ diff --git a/blog/assets/images/advanced-git-commands-2/Ktwgmn5.png b/blog/assets/images/advanced-git-commands-2/Ktwgmn5.png new file mode 100644 index 000000000..2f28df498 Binary files /dev/null and b/blog/assets/images/advanced-git-commands-2/Ktwgmn5.png differ diff --git a/blog/assets/images/advanced-git-commands-2/Vld0GSX.png b/blog/assets/images/advanced-git-commands-2/Vld0GSX.png new file mode 100644 index 000000000..10aefc29a Binary files /dev/null and b/blog/assets/images/advanced-git-commands-2/Vld0GSX.png differ diff --git a/blog/assets/images/advanced-git-commands-2/ca6733J.png b/blog/assets/images/advanced-git-commands-2/ca6733J.png new file mode 100644 index 000000000..36c90233f Binary files /dev/null and b/blog/assets/images/advanced-git-commands-2/ca6733J.png differ diff --git a/blog/assets/images/advanced-git-commands-2/gRSZul3.jpg b/blog/assets/images/advanced-git-commands-2/gRSZul3.jpg new file mode 100644 index 000000000..74d0674c1 Binary files /dev/null and b/blog/assets/images/advanced-git-commands-2/gRSZul3.jpg differ diff --git a/blog/assets/images/advanced-git-commands-2/header.jpg b/blog/assets/images/advanced-git-commands-2/header.jpg new file mode 100644 index 000000000..c836632a6 Binary files /dev/null and b/blog/assets/images/advanced-git-commands-2/header.jpg differ diff --git a/blog/assets/images/advanced-git-commands-2/n97KMiO.jpg b/blog/assets/images/advanced-git-commands-2/n97KMiO.jpg new file mode 100644 index 000000000..c7002c349 Binary files /dev/null and b/blog/assets/images/advanced-git-commands-2/n97KMiO.jpg differ diff --git a/blog/assets/images/advanced-git-commands-2/oE08Cxg.png b/blog/assets/images/advanced-git-commands-2/oE08Cxg.png new file mode 100644 index 000000000..b093692bd Binary files /dev/null and b/blog/assets/images/advanced-git-commands-2/oE08Cxg.png differ diff --git a/blog/assets/images/advanced-git-commands-2/pz86Pau.png b/blog/assets/images/advanced-git-commands-2/pz86Pau.png new file mode 100644 index 000000000..a944e6c95 Binary files /dev/null and b/blog/assets/images/advanced-git-commands-2/pz86Pau.png differ diff --git a/blog/assets/images/advanced-git-commands-2/vNDp0lX.jpg b/blog/assets/images/advanced-git-commands-2/vNDp0lX.jpg new file mode 100644 index 000000000..8da0a2291 Binary files /dev/null and b/blog/assets/images/advanced-git-commands-2/vNDp0lX.jpg differ diff --git a/blog/assets/images/advanced-git-commands-2/x0ODzpJ.png b/blog/assets/images/advanced-git-commands-2/x0ODzpJ.png new file mode 100644 index 000000000..e0916f688 Binary files /dev/null and b/blog/assets/images/advanced-git-commands-2/x0ODzpJ.png differ diff --git a/blog/assets/images/deployment-strategies-kubernetes/header.jpg b/blog/assets/images/deployment-strategies-kubernetes/header.jpg new file mode 100644 index 000000000..74177b86e Binary files /dev/null and b/blog/assets/images/deployment-strategies-kubernetes/header.jpg differ diff --git 
a/blog/assets/images/deployment-strategies-kubernetes/strategy.png b/blog/assets/images/deployment-strategies-kubernetes/strategy.png new file mode 100644 index 000000000..a077778af Binary files /dev/null and b/blog/assets/images/deployment-strategies-kubernetes/strategy.png differ diff --git a/blog/assets/images/rust-api-rocket-diesel/FXr2D8W.jpg b/blog/assets/images/rust-api-rocket-diesel/FXr2D8W.jpg new file mode 100644 index 000000000..fd1c7b4c3 Binary files /dev/null and b/blog/assets/images/rust-api-rocket-diesel/FXr2D8W.jpg differ diff --git a/blog/assets/images/rust-api-rocket-diesel/RrobV0A.jpg b/blog/assets/images/rust-api-rocket-diesel/RrobV0A.jpg new file mode 100644 index 000000000..7cf680984 Binary files /dev/null and b/blog/assets/images/rust-api-rocket-diesel/RrobV0A.jpg differ diff --git a/blog/assets/images/rust-api-rocket-diesel/VYsP2gG.jpg b/blog/assets/images/rust-api-rocket-diesel/VYsP2gG.jpg new file mode 100644 index 000000000..76c30f60f Binary files /dev/null and b/blog/assets/images/rust-api-rocket-diesel/VYsP2gG.jpg differ diff --git a/blog/assets/images/rust-api-rocket-diesel/ZSVGidf.jpg b/blog/assets/images/rust-api-rocket-diesel/ZSVGidf.jpg new file mode 100644 index 000000000..c06ad5bc3 Binary files /dev/null and b/blog/assets/images/rust-api-rocket-diesel/ZSVGidf.jpg differ diff --git a/blog/assets/images/rust-api-rocket-diesel/database.png b/blog/assets/images/rust-api-rocket-diesel/database.png new file mode 100644 index 000000000..28fdeff14 Binary files /dev/null and b/blog/assets/images/rust-api-rocket-diesel/database.png differ diff --git a/blog/assets/images/rust-api-rocket-diesel/header.jpg b/blog/assets/images/rust-api-rocket-diesel/header.jpg new file mode 100644 index 000000000..18101fce0 Binary files /dev/null and b/blog/assets/images/rust-api-rocket-diesel/header.jpg differ diff --git a/blog/assets/images/rust-api-rocket-diesel/qLvOzrq.jpg b/blog/assets/images/rust-api-rocket-diesel/qLvOzrq.jpg new file mode 100644 index 000000000..c9e48f58e Binary files /dev/null and b/blog/assets/images/rust-api-rocket-diesel/qLvOzrq.jpg differ diff --git a/blog/assets/images/using-github-actions-locally/header.jpg b/blog/assets/images/using-github-actions-locally/header.jpg new file mode 100644 index 000000000..612f86c43 Binary files /dev/null and b/blog/assets/images/using-github-actions-locally/header.jpg differ diff --git a/blog/assets/images/using-github-actions-locally/how.png b/blog/assets/images/using-github-actions-locally/how.png new file mode 100644 index 000000000..f140947cf Binary files /dev/null and b/blog/assets/images/using-github-actions-locally/how.png differ diff --git a/blog/assets/images/using-github-actions-locally/limit.png b/blog/assets/images/using-github-actions-locally/limit.png new file mode 100644 index 000000000..8f913577e Binary files /dev/null and b/blog/assets/images/using-github-actions-locally/limit.png differ