This repo contains the solution for the .NET on Azure video series on YouTube.
- Azure App Service
- Azure Kubernetes Service
- Azure Functions
- Application Insights
- Azure SignalR
- Azure SQL
- Key Vault
- Container Registry
- Blob Storage
- Managed Identity
- Azure Container Apps
- App Config
- Cosmos Db
```mermaid
graph TD
A[Enrollment] ---> B["ABC Organization (Tenant aka Directory)<br>For eg: abc.onmicrosoft.com"]
B --> C1["Contoso(Tenant)<br>For eg: contoso.onmicrosoft.com"]
B --> C2["Fabrikam(Tenant)<br>For eg: fabrikam.onmicrosoft.com"]
B ---> C3["Some Saas Non Azure Subscription"]
C1 --> D11["Dev Sub"]
C1 --> D12["Prod Sub"]
D11 --> E111["Dev RG1"]
D11 --> E112["Dev RG2"]
D12 --> E121["Prod RGs"]
E111 --> F1111["Resources.<br>For eg: AppService, SqlDb etc."]
E112 --> F1121["..."]
E121 --> F1211["..."]
C2 --> D21["Dev Sub"]
C2 --> D22["Prod Sub"]
D21 --> E211["Dev RGs"]
D22 --> E221["Prod RGs"]
E211 --> F2111["..."]
E221 --> F2211["..."]
C3 --> D31["Office 365"]
C3 --> D32["Dynamics 365"]
D31 --> E311["For eg: 100 E5 licenses"]
D32 --> E321["For eg: 50 licenses"]
classDef hidden display: none;
```
Note: Users live at the Tenant/Directory level.
- Azure billing and cost management construct for Enterprise Agreement customers. ea.azure.com.
- For large companies.
- Is associated with a single entity, i.e. a person, company, or org, and can own one or several subscriptions.
- In my case, it's a person with domain name affableashkoutlook.onmicrosoft.com and Organization Id (tenant Id) 9d7f6902-2a61-4363-964c-c464b9eaf716 (found by going to Settings -> Directories + subscriptions OR Menu -> Azure Active Directory).
- Contains user accounts and groups.
- EVERY TENANT IS LINKED TO A SINGLE AZURE AD INSTANCE, which is shared with all of the tenant's subscriptions.
- Each tenant is referred to as an organization.
- I can create multiple tenants after logging in to Azure Portal.
- Directory Id is the same as Tenant Id because of the one-to-one relationship between a tenant and Azure AD.
- It's called a directory because each tenant has an Azure AD directory associated with it.
- Every MSFT service is always associated with an Azure AD tenant, even if we're not using Azure. For eg: If I'm using O365, I'll have Azure AD at the top of it.
- Construct for creating separate billing and management boundaries. Is managed in portal.azure.com.
- An agreement with MSFT to use one or more MSFT cloud platforms or services, for which charges accrue based on either:
  - Per-user license fee. For eg: SaaS like Office 365 or Dynamics 365.
  - Cloud-based resource consumption. For eg: PaaS and IaaS.
- A subscription is linked to a payment setup, and each subscription results in a separate bill.
- Subscription could be CSP (Cloud Service Provider), Pay-As-You-Go, EA etc.
- For customers like Ashish Khanal who can use a credit card and do Pay-As-You-Go.
- Can create multiple subscriptions in Azure account to create separation.
- A subscription can only be associated with a single Azure AD tenant at any given time.
- Can be linked to existing identity stores for single sign on, or segregated into a separate area.
- Becomes the major separation for assignment of RBAC within services.
- Inside every subscription we can add Resources like VM, SqlDb etc.
- Tenant or Directory has 1:M relationship with Subscription.
- A container that holds related resources.
- Like App Service, Sql Db etc.
- Resources in 1 RG are completely isolated from resources in another RG.
References:
- Subscriptions, licenses, accounts, and tenants
- Tenants and Subscriptions
- Difference between Tenant and Subscription
Go to portal.azure.com. It's pretty self-explanatory.
Follow instructions here.
Follow instructions here.
```shell
sudo chown -R $(whoami) /usr/local/var/homebrew
sudo chown -R $(whoami) /usr/local/opt
chmod u+w /usr/local/opt
brew update && brew install azure-cli
az login
```
Why?
Cloud Shell needs access to manage resources. Access is provided through a namespace that must be registered to your subscription.
Get your subscription Id using:

```shell
az account list   # Grab the Id
```

Then:

```shell
az account set --subscription <Subscription Name or Id>
az provider register --namespace Microsoft.CloudShell
```

Note: `tenantId` is my DirectoryId and `id` is my SubscriptionID.
Search for Subscription from the search bar:
You can see the cost of your services inside the Subscription. Click on the 'Azure subscription 1' shown above.
Create a web app (`MunsonPickles.Web`) and an API (`MunsonPickles.API`). Take a look at the code to see how they look.
Note about Blazor Web App in .NET 8:
With .NET 8 comes a new template for Blazor applications simply called Blazor Web App, and by default all components use server-side rendering.
To add interactivity to it, you need to add the following service and middleware. More info here.
```csharp
builder.Services.AddRazorComponents() // Adds services required to server-side render components. Added by default.
    .AddServerComponents();           // Added for server-side interactivity
app.MapRazorComponents<App>()         // Discovers routable components and sets them up as endpoints. Added by default.
    .AddServerRenderMode();           // Added for server-side interactivity
```
Add interactivity to the new Blazor Web App in .NET 8 using this guide.
Line 13 will create the tables, and line 14 will seed the database.
An explicit migration is not required in this approach. A migration-based approach would look like:

```shell
dotnet ef migrations add InitialCreate -o Data/Migrations
dotnet ef database update
```

So when the line `db.Database.EnsureCreated()` runs, it'll create the database, and the next line will initialize (seed) the database.
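For context, that startup pattern looks roughly like this. This is a sketch; `PickleContext` and `DbInitializer` are placeholder names, not necessarily the exact ones in this repo:

```csharp
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// Register the DbContext using the connection string from user-secrets/appsettings
builder.Services.AddDbContext<PickleContext>(opts =>
    opts.UseSqlServer(builder.Configuration.GetConnectionString("Default")));

var app = builder.Build();

using (var scope = app.Services.CreateScope())
{
    var db = scope.ServiceProvider.GetRequiredService<PickleContext>();
    db.Database.EnsureCreated();  // Creates the database and tables straight from the model (no migrations)
    DbInitializer.Initialize(db); // Hypothetical seeder that populates initial data
}

app.Run();
```

`EnsureCreated()` is handy for demos, but it bypasses migrations entirely, so switching to `dotnet ef migrations` later requires dropping the database first.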
```shell
az group create -g rg-pitstop-eastus-dev-001 -l eastus
```

`-g` is for the resource group name, `-l` is for the location (remember RALEIgh).
So the convention that I'll be using is:
`ResourceGroup-AppName-Location-Environment-Instance`
For the web app: `app-APPNAME-web-LOCATION-ENV-INSTANCE` (web because it's a web app).
The web app runs on an App Service Plan, which determines its CPU and memory.
You can name it like this: `asp-APPNAME-LOCATION-ENV-INSTANCE` (asp = App Service Plan).
Notice it doesn't have a type like web, api etc. after APPNAME. That's because I want to put both the web app and the web API in that App Service Plan.
Db Server: sqlserver-munson-eastus-dev-001
Allow connections to this SQL server from your IP.
They appear under Firewall rules. Only do this for dev scenarios, NOT for PROD.
And notice that's my IP address:
Db Name: sqldb-munson-eastus-dev-001
Grab the connection string (ADO.NET SQL auth):

```
Server=tcp:sqlserver-munson1-eastus-dev-001.database.windows.net,1433;Initial Catalog=sqldb-munson-eastus-dev-001;Persist Security Info=False;User ID=munson;Password={your_password};
```
Add this connection string to dotnet user-secrets.
Set the connection string with this command:

```shell
dotnet user-secrets set ConnectionStrings:Default "Server=tcp:sqlserver-munson1-eastus-dev-001.database.windows.net,1433;Initial Catalog=sqldb-munson-eastus-dev-001;Persist Security Info=False;User ID=munson;Password={your_password};"
```
This will set the connection string like this in the secrets.json:
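For reference, `secrets.json` ends up with a flat key along these lines (a sketch; the `{your_password}` placeholder stays yours to fill in):

```json
{
  "ConnectionStrings:Default": "Server=tcp:sqlserver-munson1-eastus-dev-001.database.windows.net,1433;Initial Catalog=sqldb-munson-eastus-dev-001;Persist Security Info=False;User ID=munson;Password={your_password};"
}
```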
It actually looks like this (after installing this plugin):
This is where that file is stored. Reference.
Grab the connection string:

```
Server=tcp:sqlserver-munson1-eastus-dev-001.database.windows.net,1433;Initial Catalog=sqldb-munson-eastus-dev-001;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Authentication="Active Directory Default";
```
The passwordless connection string includes a configuration value of `Authentication=Active Directory Default`, which enables Entity Framework Core to use `DefaultAzureCredential` to connect to Azure services. When the app runs locally, it authenticates with the user you're signed into Visual Studio with. Once the app deploys to Azure, the same code discovers and applies the managed identity that is associated with the hosted app, which you'll configure later.
At this point you need to be logged into Azure using the Azure CLI (`az login`); if you are not logged in, you'll get this exception when you try to run the app:
Azure CLI login is shown here.
So far:
Resource Group: rg-munson-eastus-dev-001
App Service: app-munson-web-eastus-dev-001
App Service Plan: asp-munson-eastus-dev-001
Db Server: sqlserver-munson-eastus-dev-001
Db Name: sqldb-munson-eastus-dev-001
A managed identity from Azure Active Directory (Azure AD) allows App Service to access resources through role-based access control (RBAC), without requiring app credentials. After assigning a managed identity to your web app, Azure takes care of the creation and distribution of a certificate. People don't have to worry about managing secrets or app credentials.
This is a secret-less way of doing it, which is why I love it. For eg: no credentials in the connection string.
Any service that supports managed identity (B in the following image) can be securely accessed.
Internally, managed identities are service principals of a special type which are locked to only be used with Azure resources.
References
While I'm interacting with my Azure resources, I also talk to Azure AD to get my token and make requests. See the example here:
Go to Plugins and install Azure Toolkit for Rider.
Go to Tools -> Azure -> Azure Sign In
Go with Device Login
Select my Subscription
Right click on Project -> Publish -> Azure
Select 'Use Existing Web App' and click on the app shown below, like so:
Click Apply -> Run
To publish it again, click configuration dropdown in the top right:
At this point, the app doesn't work correctly on Azure. You still need to configure the secure connection between the App Service and the SQL database to retrieve your data. Read all about it here.
The following steps are required to connect the App Service instance to Azure SQL Database:
- Create a managed identity for the App Service. The `Microsoft.Data.SqlClient` library included in your app will automatically discover the managed identity, just like it discovered your local machine Azure user.
- Create a SQL database user and associate it with the App Service managed identity.
- Assign SQL roles to the database user that allow for read, write, and potentially other permissions.
Use Service connector to accomplish this:
Service Connector is a tool that streamlines authenticated connections between different services in Azure. Service Connector currently supports connecting an App Service to a SQL database via the Azure CLI using the `az webapp connection create sql` command. This single command completes the three steps mentioned above for you.
Go to Azure Portal and into the app service. You can see that it doesn't have anything under Identity -> System assigned
Now run this command (run it in Cloud Shell or Azure CLI):
which translates to:

```shell
az webapp connection create sql -g rg-sampleapp-eastus-dev-001 -n app-munson-web2-eastus-dev-001 --tg rg-sampleapp-eastus-dev-001 --server sqlserver-munson1-eastus-dev-001 --database sqldb-munson-eastus-dev-001 --system-identity --connection ThisCanBeAnything --client-type dotnet
```
At this point, you'll have managed Identity showing:
The connection string created by the above command will show up inside Configuration:

```
Data Source=sqlserver-munson1-eastus-dev-001.database.windows.net,1433;Initial Catalog=sqldb-munson-eastus-dev-001;Authentication=ActiveDirectoryManagedIdentity
```
The user should show up in the SQL Db as well
Now go to the app url to see your app running.
Unfortunately, it didn't start. :(
Go to App Service -> Diagnose and solve problems -> Availability and Performance
Container crash ->
UPDATE: Running the command again solved the issue for me:
Now the app runs from Azure!
Keep in mind that when you deploy a web app to Azure, it's treated as Production. 'Production' is the default if `DOTNET_ENVIRONMENT` and `ASPNETCORE_ENVIRONMENT` are not set. Reference.
Environment values set in `launchSettings.json` override values set in the system environment. That file is only used on the local dev machine.
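For eg, a typical `launchSettings.json` profile that sets the environment locally looks something like this (the profile name is illustrative):

```json
{
  "profiles": {
    "MunsonPickles.Web": {
      "commandName": "Project",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
```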
A binary large object (blob) is a collection of binary data stored as a single entity. Blobs are typically images, audio or other multimedia objects, though sometimes binary executable code is stored as a blob.
A general purpose v2 storage account provides access to all of the Azure Storage services: blobs, files, queues, tables, and disks.
Blobs in Azure storage are organized into containers.
Before we can upload a blob, we must first create a container.
For eg: I gave 'web' as the name of my container.
The access level of this container is private by default. To change this to public, go to Configuration -> Allow Blob anonymous access -> Enabled -> Save
Now change access level of this container: -> Blob
We need to grant our web app access to the storage account before we can create, read or delete blobs.
Using Azure RBAC, we can give the managed Identity of the web app access to another Azure resource just like any security principal (User Principal or Service Principal explained earlier in this page).
The 'Storage Blob Data Contributor' role gives the web app (represented by the system assigned managed identity) read, write, and delete access to the blob container and data.
Go to my storage account to grant my web app access to it.
Go to IAM -> Role Assignments This shows who has access to this resource. There's ME!
Let's add a role assignment for a robot 🤖 (Managed Identity)
Select Add -> Add role assignment
Search for the 'Storage Blob Data Contributor' role
Click Next to Select who needs this role
Managed Identity -> Select Members -> Subscription -> App Service (Web App) ->
The managed Identity shows up.
Select it and hit Next.
Hit 'Review + assign'.
The IAM page looks like this after the assignment:
Now go ahead and upload images to the 'web' container using the Azure portal. It's a simple file upload from your computer. I uploaded a few images of pickles and preserves.
The thing is, these images are only available in the East US. If I try to hit the blob url from Asia, it'll have to make a bunch of internet hops to get to it. So what we can do is put a CDN on top of our blob storage.
CDN lives on the Azure edge.
Go to CDN ->
Give Profile name, Endpoint name and specify Query string caching behavior.
Hit create:
Go to CDN endpoint now
Grab the endpoint hostname that's served through CDN. Origin hostname is being served through the storage living in eastus.
Notice the urls.
Now grab the endpoint hostname + web + filename and update the db:
Update the code to show product photo in a "col" class.
Now the page looks like this:
Explanation on Query string caching behavior options:
- Ignore Query String: The first request is served from the origin server and the response is cached. Subsequent requests are served from the cache whatever the query string is. This sounds ludicrous!
Request1: Browser (mydomain.com/articles?page=3) -> Azure CDN -> Server (mydomain.com/articles?page=3) Request2: Browser (mydomain.com/articles?page=42) -> Azure CDN (from cached whatever the query string)
- Bypass caching for query string: Azure CDN doesn't cache the requests that have a query string.
Request1: Browser (mydomain.com/articles?page=3) -> Azure CDN -> Server (mydomain.com/articles?page=3) Request2: Browser (mydomain.com/articles?page=3) -> Azure CDN -> Server (mydomain.com/articles?page=3)
- Use query string: Each request with a unique url including the query string is treated as a unique asset with its own cache.
Request1: Browser (www.example.ashx?q=test1) -> Azure CDN -> Server (www.example.ashx?q=test1) Request2: Browser (www.example.ashx?q=test1) -> Azure CDN (from cache)
The order of the query string parameters doesn't matter. For example, if the Azure CDN environment includes a cached response for the URL `www.example.ashx?q=test1&r=test2`, then a request for `www.example.ashx?r=test2&q=test1` is also served from the cache.
Now we want to give users the ability to upload images while giving a review of a product.
For this we need Azure SDKs.
Go to Dependencies -> Manage NuGet Packages and add these packages to the project:
- `Azure.Storage.Blobs`: To work with Blob storage.
- `Microsoft.Extensions.Azure`: Azure client SDK integration with `Microsoft.Extensions` libraries.
For eg: To get the line that sets up the connection to Blob storage to work. This article helped.
IMPORTANT (this wasted a few hours and caused a lot of frustration):
Your account needs to have a Role Assignment to upload files, even though I'm the owner.
Your account comes into the picture via the `DefaultAzureCredential` used to set up the `BlobServiceClient` during local development.
Also as you can see in the screenshot above, Azure App Service (Web app) already has access to it through Managed Identity when it runs in the cloud.
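The registration being described can be sketched like this (the storage account URL is the one used earlier in this doc; `AddAzureClients` comes from the `Microsoft.Extensions.Azure` package):

```csharp
using Azure.Identity;
using Microsoft.Extensions.Azure;

var builder = WebApplication.CreateBuilder(args);

// Register BlobServiceClient with DefaultAzureCredential:
// locally it authenticates as your signed-in Azure CLI / IDE account,
// in Azure it uses the app's system-assigned managed identity.
builder.Services.AddAzureClients(azureBuilder =>
{
    azureBuilder.AddBlobServiceClient(new Uri("https://stmunsoneastusdev001.blob.core.windows.net/"));
    azureBuilder.UseCredential(new DefaultAzureCredential());
});
```

Any class can then take `BlobServiceClient` as a constructor or endpoint parameter and DI will supply the configured client.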
Take a look at the code to see how I implemented file upload using minimal APIs. It's pretty nice!
Everything about adding auth to the app is documented here.
Benefits of containerizing an app:
- All components in a single package.
- Assures new instances are the same.
- Quickly spin up new instances.
- Instances can be deployed in many places.
- Helps with agile development because you don't have to pull all the services; you can work with just your "micro" service at a time.
Azure container services:
- Azure Container Registry (sort of like NuGet but for images)
- Azure App Service
- Azure Functions
- Azure Container Apps: an abstraction over Kubernetes, which is really nice!
- Azure Kubernetes Service (AKS)
- Azure Container Instances: lets you spin up containers in the cloud and run them. Not really great for prod scenarios because, for eg, if the container goes down, it stays down; there's no orchestrator to replace it.
For eg: munsonpicklesacr.azurecr.io
dotnet-on-azure/MunsonPickles.API.Dockerfile (lines 1 to 22 at 1855e45)
dotnet-on-azure/MunsonPickles.Web.Dockerfile (lines 1 to 22 at 1855e45)
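Those Dockerfiles follow the standard multi-stage .NET pattern; a sketch for the API (the files in the repo are authoritative, so treat the image tags and project path here as illustrative):

```dockerfile
# Build stage: restore and publish with the full SDK image
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MunsonPickles.API/MunsonPickles.API.csproj -c Release -o /app/publish

# Runtime stage: copy only the published output into the slim ASP.NET runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MunsonPickles.API.dll"]
```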
You can build it in your local computer and push it to the Azure Container Registry (ACR). But for this example, I want to use cloud shell to do this.
Notice that you have docker in there already!
Steps
- Clone your repo (with the Dockerfile checked in): `gh repo clone https://github.com/akhanalcs/dotnet-on-azure.git`
- Go to the repo folder: `cd ./dotnet-on-azure`
- Build and push the image to your Azure Container Registry
It looks similar to building an image locally:

```shell
# Build in the cloud and push to ACR:
az acr build --file MunsonPickles.API.Dockerfile . --image pickles/my-cool-api:0.1.0 --registry munsonpicklesacr.azurecr.io
# Local equivalent:
docker build -f MunsonPickles.API.Dockerfile . -t pickles/my-cool-api:0.1.0
```
Login to Azure ACR from your computer:

```shell
az acr login -n munsonpicklesacr
```

Pull the image down (MunsonPicklesACR registry -> Repositories -> find the image -> copy the 'docker pull' command):

```shell
docker pull munsonpicklesacr.azurecr.io/pickles/my-cool-api:0.1.0
```

Run the image:

```shell
docker run --rm -it -p 8000:8080 -e ASPNETCORE_ENVIRONMENT=Development munsonpicklesacr.azurecr.io/pickles/my-cool-api:0.1.0
```
Create Web App (Azure App Service)
Name: munson-api-linux-westus
Publish: Docker Container
...
After the deployment is complete, go to the `munson-api-linux-westus` Web App -> Identity (under Settings).
Turn On 'Managed Identity'.
Now go to the `MunsonPicklesACR` registry and give access to the managed identity you just created so the web app can pull images from this registry.
Go to Access Control -> Add -> Role Assignment (AcrPull) -> Assign access to: Managed Identity -> choose your App Service -> Next -> Review + assign
Now go back to app service to tell it which registry it should go to and which image it should pull.
Go to Deployment Center (under Deployment) -> Settings
Container type: Single container
Registry source: Azure Container Registry
Authentication: Managed Identity
Registry: MunsonPicklesACR
Image: pickles/api
Tag: 0.1.0
-> Save
Restart the web app and take it for a test ride by clicking the url. It should work at this point.
- Cloud provider manages infrastructure
- Allocates resources as necessary
- Scales down to zero. Reap this benefit by making your function focus on a specific task and not do a whole lot, so that when it's idle it can scale down to zero.
- Lets you focus on business logic
- It launches in response to events
- Integrates with other Azure services
- Build a web api
- Process file uploaded to Blob storages
- Respond to database changes
- Process message queues
- Analyze IoT streams
- Real time data with SignalR
- Events that start the function.
- Have incoming data. For eg: an HTTP request that triggered a function; you can get the request body or query string of that request as incoming parameters.
- There are triggers for many different services like:
- HTTP
- Timer
- Storage
- Data
- Event Grid
- etc.
- Connect to another service like SendGrid if you want to send emails.
- Input and Output bindings so you can have data come in or you can be writing data to various services.
- Can have multiple bindings per Azure Function. For eg: say an HTTP request comes in and triggers a function; we could have a binding to Table storage that pulls out data, and another binding to, say, Blob storage that pulls a blob into your function to do something with it.
Whenever an image (picture) is uploaded, the app writes a message to an Azure storage queue; that kicks off an Azure Function with a queue trigger. The function runs and uses a table output binding to write some data to Table storage.
It'll show input binding, trigger and an output table binding.
In the `MunsonPickles.API` project:
- Add the storage queue SDK. Install the `Azure.Storage.Queues` NuGet package.
- Add the storage queue endpoint url to appsettings.json. It just has `queue` instead of `blob` in the connection string. For eg: this is my blob storage connection string: `https://stmunsoneastusdev001.blob.core.windows.net/`, so the queue connection string will be: `https://stmunsoneastusdev001.queue.core.windows.net/`.
- Register it in Program.cs inside `.AddAzureClients`:

  ```csharp
  azureBuilder.AddQueueServiceClient(new Uri(azQueueConnection))
      .ConfigureOptions(opts =>
      {
          opts.MessageEncoding = QueueMessageEncoding.Base64; // Make sure any message I send is base64 encoded
      });
  ```
- Now go into `ReviewEndpoint.cs` and add logic to write a message to the Azure storage queue after an image is uploaded.

  ```csharp
  using Azure.Storage.Queues;
  using Azure.Storage.Queues.Models;

  // Inject QueueServiceClient queueServiceClient into the "UploadReviewImages" method and use it
  // Get a queue client for the queue called "review-images"
  var queueClient = queueServiceClient.GetQueueClient("review-images");

  // Create the queue if it doesn't exist
  // (Note: a queue's CreateIfNotExistsAsync takes no access type; PublicAccessType is a blob concept)
  await queueClient.CreateIfNotExistsAsync();

  // Send a message to the queue
  await queueClient.SendMessageAsync($"{loggedInUser} {trustedFileNameForStorage}");
  ```

- Run the app, upload an image and go to your storage account in the Azure Portal. Go to Storage Browser -> Queues. You'll see a new queue called `review-images` and you'll see a message there. The message body will be in the format `$"{loggedInUser} {trustedFileNameForStorage}"`.
- Create a new Azure Functions project `MunsonPickles.Functions`.
  - Pick `Queue trigger`, which means "hey, run it off a queue".
  - Specify the connection string name. For eg: I chose `PickleStorageConnection`.
  - Specify the queue name, which is `review-images` from the previous step.
- Specify connection strings in `local.settings.json`.
  When the function runs, it needs a place to store its state. It uses a storage account for that, so specify a connection string that points to our storage account as the value of the `AzureWebJobsStorage` key. In a real prod scenario, you should have a separate storage account dedicated to your Functions.

  ```json
  {
    "IsEncrypted": false,
    "Values": {
      "AzureWebJobsStorage": "Put the connection string with Account Name and Account Key",
      "FUNCTIONS_WORKER_RUNTIME": "dotnet",
      "PickleStorageConnection": "Put the connection string with Account Name and Account Key"
    }
  }
  ```
- When a new message comes in to the `review-images` storage queue, it'll trigger the below function, which writes `ReviewImageInfo` data to the `reviewimagedata` table.

  ```csharp
  [StorageAccount("PickleStorageConnection")]
  public class QueueMonitor
  {
      [FunctionName("QueueMonitor")]
      [return: Table("reviewimagedata")] // If there's no reviewimagedata table present, it'll create it.
      public ReviewImageInfo Run(
          [QueueTrigger("review-images")] string message,
          ILogger log)
      {
          // Split the message based on the space
          var theParts = message.Split(' ');
          // User id is the first part
          var userId = theParts[0];
          // Image name is the second part
          var imageName = theParts[1];

          log.LogInformation($"C# Queue trigger function processed: {message}");

          // Write to table storage with information about the blob
          return new ReviewImageInfo
          {
              BlobName = $"{userId}/{imageName}", // Needs the $ prefix to actually interpolate
              PartitionKey = userId,
              RowKey = Guid.NewGuid().ToString(),
              UploadedDate = DateTime.Now,
              ImageName = imageName,
              UserId = userId
          };
      }
  }

  public class ReviewImageInfo
  {
      public string PartitionKey { get; set; }
      public string RowKey { get; set; }
      public string BlobName { get; set; }
      public string ImageName { get; set; }
      public string UserId { get; set; }
      public DateTime UploadedDate { get; set; }
  }
  ```

- Run the function. Go to 'Storage Browser' -> Tables. You'll see a new table `reviewimagedata` that has been populated with `ReviewImageInfo`.
GitHub Actions is a
- CI/CD platform
- Integrated into your GitHub repository
- Organized into workflows like build and deploy
- 'Actions' run on events
  - Push
  - New Issue
  - Pull Request
  - etc.
- Defined in a YAML file
The components of GitHub actions
- Workflow (on a clean VM)
- The overall process
- Can have more than 1 workflow per repository
- Executes on a "runner" or server
- Server can be a windows, ubuntu or macos VM
- Event
- Triggers a workflow to run
- Many types (push, PR, issue)
- Job
- A grouping of steps (actions) to execute
- Each job will run inside its own virtual machine runner, or inside a container and has one or more steps.
- Action
- Built-in task that performs a common complex task
Create a file at `dotnet-on-azure/.github/workflows/deploy-api.yaml` (lines 1 to 44 at e621a6e).
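That workflow follows the standard build-and-deploy shape for App Service. A trimmed sketch (the real deploy-api.yaml in the repo is authoritative; action versions and the project path here are illustrative, while the app name and secret name come from the steps below):

```yaml
name: Deploy API

on:
  push:
    branches: [ main ]

env:
  AZURE_WEBAPP_NAME: "app-munson-web2-eastus-dev-001" # Copied from App Service name

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-dotnet@v3
        with:
          dotnet-version: '8.0.x'
      - run: dotnet publish MunsonPickles.API/MunsonPickles.API.csproj -c Release -o ./publish
      - uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          publish-profile: ${{ secrets.API_PUBLISH_PROFILE }}
          package: ./publish
```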
- Add the profile you downloaded from step 1 to your GitHub repo
  - Open the file you downloaded in step 1 using a text editor like notepad and copy it
  - Go to your GitHub repo -> Settings -> Secrets and Variables -> Actions
  - Under 'Repository secrets', click 'New repository secret', give it the name "API_PUBLISH_PROFILE" and paste the publish profile there
- Add environment variables

  ```yaml
  env:
    AZURE_WEBAPP_NAME: "app-munson-web2-eastus-dev-001" # Copied from App Service name
  ```

- It succeeds
- Check the app running in Azure App Service by hitting an endpoint
Remember, it came from