---
copyright:
lastupdated: "2019-04-19"
---
{:java: #java .ph data-hd-programlang='java'}
{:swift: #swift .ph data-hd-programlang='swift'}
{:ios: #ios data-hd-operatingsystem="ios"}
{:android: #android data-hd-operatingsystem="android"}
{:shortdesc: .shortdesc}
{:new_window: target="_blank"}
{:codeblock: .codeblock}
{:screen: .screen}
{:tip: .tip}
{:pre: .pre}
# Best practices for organizing users, teams, applications
{: #users-teams-applications}
This tutorial gives an overview of the concepts available in {{site.data.keyword.cloud_notm}} to manage identity and access, and how they can be implemented to support the multiple development stages of an application.
{: shortdesc}
When building an application, it is very common to define multiple environments reflecting the development lifecycle of a project, from a developer committing code to the application being made available to end users. Sandbox, test, staging, UAT (user acceptance testing), pre-production, and production are typical names for these environments.
Isolating the underlying resources, implementing governance and access policies, protecting a production workload, and validating changes before pushing them to production are some of the reasons why you would want to create these separate environments.
## Objectives
{: #objectives}
- Learn about {{site.data.keyword.iamlong}} and Cloud Foundry access models
- Configure a project with separation between roles and environments
- Set up continuous integration
## Services used
{: #services}
This tutorial uses the following runtimes and services:
- {{site.data.keyword.iamlong}}
- {{site.data.keyword.containershort_notm}}
- {{site.data.keyword.cos_full_notm}}
- Cloud Foundry
- {{site.data.keyword.cloudantfull}}
This tutorial may incur costs. Use the Pricing Calculator to generate a cost estimate based on your projected usage.
Let's consider a sample project with the following components:
- several microservices deployed in {{site.data.keyword.containershort_notm}},
- databases,
- file storage buckets.
In this project, we define three environments:
- Development - this environment is continuously updated with every commit; unit tests and smoke tests are executed. It gives access to the latest and greatest deployment of the project.
- Testing - this environment is built from a stable branch or tag of the code. This is where user acceptance testing is performed. Its configuration is similar to the production environment. It is loaded with realistic data (for example, anonymized production data).
- Production - this environment is updated with the version validated in the previous environment.
A delivery pipeline manages the progression of a build through the environments. It can be fully automated or include manual validation gates to promote approved builds between environments; this is left open on purpose and should be set up to match your company's best practices and workflows.
To support the execution of the build pipeline, we introduce a functional user - a regular {{site.data.keyword.cloud_notm}} user, but a team member with no real identity in the physical world. This functional user will own the delivery pipelines and any other cloud resources requiring strong ownership. This approach helps in the case where a team member leaves the company or moves to another project. The functional user is dedicated to your project and does not change over the lifetime of the project. The next thing you will want to create is an API key for this functional user. You will select this API key when you set up the DevOps pipelines, or when you want to run automation scripts, to impersonate the functional user.
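As an illustration, here is a minimal CLI sketch for creating that API key once you are logged in as the functional user; the key name, description, and output file are hypothetical.

```sh
# Log in interactively as the functional user, then create an API key it owns.
ibmcloud login
ibmcloud iam api-key-create my-project-pipeline-key \
  -d "API key owned by the functional user for the my-project delivery pipeline" \
  --file pipeline-key.json
# Store the generated key securely; its value cannot be retrieved again later.
```
{: codeblock}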
When it comes to assigning responsibilities to the project team members, let's define the following roles and related permissions:
| | Development | Testing | Production |
|---|---|---|---|
| Developer | | | |
| Tester | | | |
| Operator | | | |
| Pipeline Functional User | | | |
{: #first_objective}
{{site.data.keyword.iamshort}} (IAM) enables you to securely authenticate users for both platform and infrastructure services and to control access to resources consistently across the {{site.data.keyword.cloud_notm}} platform. A set of {{site.data.keyword.cloud_notm}} services are enabled to use Cloud IAM for access control; they are organized into resource groups within your account so that you can give users quick and easy access to more than one resource at a time. Cloud IAM access policies are used to assign users and service IDs access to the resources within your account.
A policy assigns a user or service ID one or more roles with a combination of attributes that define the scope of access. The policy can provide access to a single service down to the instance level, or the policy can apply to a set of resources organized together in a resource group. Depending on the user roles that you assign, the user or service ID is allowed varying levels of access for completing platform management tasks or accessing a service by using the UI or performing specific types of API calls.
At this time, not all services in the {{site.data.keyword.cloud_notm}} catalog can be managed by using IAM. For these services, you can continue to use Cloud Foundry by providing users access to the organization and space to which the instance belongs with a Cloud Foundry role assigned to define the level of access that is allowed.
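To make the different scopes concrete, the following sketch shows two IAM policies assigned with the CLI, one covering every resource in a resource group and one limited to a single service instance; the user, group, and instance names are placeholders.

```sh
# Viewer platform role on all resources in a resource group
ibmcloud iam user-policy-create dev1@example.com \
  --roles Viewer --resource-group-name project-development

# Writer service role limited to a single Cloud Object Storage instance
ibmcloud iam user-policy-create dev1@example.com \
  --roles Writer --service-name cloud-object-storage \
  --service-instance <instance-name-or-GUID>
```
{: codeblock}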
Although the three environments needed by this sample project require different access rights and may need to be allocated different capacities, they share a common architecture pattern.
Let's start by building the Development environment. A minimal CLI sketch of these steps is shown after the list below.
- Select an {{site.data.keyword.cloud_notm}} location in which to deploy the environment.
- For Cloud Foundry services and apps:
   - Create an organization for the project.
   - Create a Cloud Foundry space for the environment.
   - Create the Cloud Foundry services used by the project under this space.
- Create a resource group for the environment.
- Create the services compatible with resource groups, like {{site.data.keyword.cos_full_notm}}, {{site.data.keyword.la_full_notm}}, {{site.data.keyword.mon_full_notm}}, and {{site.data.keyword.cloudant_short_notm}}, in this group.
- Create a new Kubernetes cluster in {{site.data.keyword.containershort_notm}}, making sure to select the resource group created above.
- Configure {{site.data.keyword.la_full_notm}} and {{site.data.keyword.mon_full_notm}} to send logs and to monitor the cluster.
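The list above can be scripted with the `ibmcloud` CLI. The sketch below shows one possible sequence for the Development environment; all names, regions, zones, and flavors are examples, and the exact cluster creation flags depend on the cluster type and plugin version.

```sh
# Target a location for the environment (us-south as an example)
ibmcloud target -r us-south

# Cloud Foundry organization and space
ibmcloud account org-create my-project
ibmcloud account space-create development -o my-project

# Cloud Foundry service instances in that space (requires the cf CLI: ibmcloud cf install)
ibmcloud target -o my-project -s development
ibmcloud cf create-service <service> <plan> my-cf-service-dev

# Resource group and IAM-enabled services
ibmcloud resource group-create project-development
ibmcloud target -g project-development
ibmcloud resource service-instance-create my-cos-dev cloud-object-storage standard global
ibmcloud resource service-instance-create my-cloudant-dev cloudantnosqldb lite us-south

# Kubernetes cluster created in the targeted resource group (classic infrastructure shown)
ibmcloud ks cluster create classic --name my-cluster-dev \
  --zone dal10 --flavor b3c.4x16 --workers 3
```
{: codeblock}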
The following diagram shows where the project resources are created under the account:
- Invite users to the account.
- Assign policies to the users to control who can access the resource group, the services within the group, and the {{site.data.keyword.containershort_notm}} instance, and what their permissions are. Refer to the access policy definition to select the right policies for a user in the environment. Users with the same set of policies can be placed into the same access group; this simplifies user management, as policies are assigned to the access group and inherited by all users in the group (see the sketch after this list).
- Configure their Cloud Foundry organization and space roles based on their needs within the environment. Refer to the role definition to assign the right roles based on the environment.
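Here is a hedged sketch of how these assignments could be scripted, using an access group for developers and a Cloud Foundry space role; all user, group, organization, and space names are examples.

```sh
# Group users with the same responsibilities into an access group
ibmcloud iam access-group-create project-developers
ibmcloud iam access-group-user-add project-developers dev1@example.com

# Attach policies to the group; every member inherits them
ibmcloud iam access-group-policy-create project-developers \
  --roles Viewer --resource-group-name project-development
ibmcloud iam access-group-policy-create project-developers \
  --roles Writer --service-name containers-kubernetes

# Cloud Foundry roles are assigned per user, per organization and space
ibmcloud account space-role-set dev1@example.com my-project development SpaceDeveloper
```
{: codeblock}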
Refer to the documentation of each service to understand how it maps IAM and Cloud Foundry roles to specific actions. See, for example, how the {{site.data.keyword.mon_full_notm}} service maps IAM roles to actions.
Assigning the right roles to users will require several iterations and refinements. Given that permissions can be controlled at the resource group level, applied to all resources in a group, or fine-grained down to a specific instance of a service, you will discover over time the ideal access policies for your project.
A good practice is to start with the minimum set of permissions and then expand carefully as needed. For Kubernetes, you will want to look at its Role-Based Access Control (RBAC) mechanism to configure in-cluster authorizations.
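As an example of what an in-cluster authorization can look like, this sketch binds the built-in `view` ClusterRole to a single user in one namespace; the user and namespace names are hypothetical, and on {{site.data.keyword.containershort_notm}} IAM service roles are also mapped to RBAC roles for you.

```sh
# Grant read-only access to the "development" namespace to one user
kubectl create rolebinding dev-viewer \
  --clusterrole=view \
  --user=dev1@example.com \
  --namespace=development
```
{: codeblock}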
For the Development environment, the user responsibilities defined earlier could translate to the following:
| | IAM Access policies | Cloud Foundry |
|---|---|---|
| Developer | | |
| Tester | | |
| Operator | | |
| Pipeline Functional User | | |
The IAM access policies and Cloud Foundry roles are defined in the Identity and Access Management user interface:
From there, you can replicate similar steps to build the other environments.
- Create one resource group per environment.
- Create one cluster and required service instances per environment.
- Create one Cloud Foundry space per environment.
- Create the required service instances in each space.
Using a combination of tools like the {{site.data.keyword.cloud_notm}} `ibmcloud` CLI, HashiCorp's `terraform`, the {{site.data.keyword.cloud_notm}} provider for Terraform, and the Kubernetes CLI `kubectl`, you can script and automate the creation of these environments.
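For example, a short shell loop, assuming hypothetical naming conventions, could repeat the per-environment setup:

```sh
# Sketch: repeat the environment setup for each stage (names are examples)
for env in development testing production; do
  ibmcloud resource group-create "project-${env}"
  ibmcloud account space-create "${env}" -o my-project
  # ...create the cluster and the service instances for ${env} here
done
```
{: codeblock}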
Separate Kubernetes clusters for the environments come with good properties:
- no matter the environment, all clusters will tend to look the same;
- it is easier to control who has access to a specific cluster;
- it gives flexibility in the update cycles for deployments and underlying resources; when there is a new Kubernetes version, you have the option to update the Development cluster first, validate your application, and then update the other environments;
- it avoids mixing different workloads that may impact each other such as isolating the production deployment from the others.
Another approach is to use Kubernetes namespaces in conjunction with Kubernetes resource quotas to isolate environments and control resource consumption.
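A minimal sketch of this approach, assuming a `development` namespace and example quota values:

```sh
# One namespace per environment, with a quota capping its resource consumption
kubectl create namespace development
kubectl create quota development-quota \
  --namespace=development \
  --hard=requests.cpu=4,requests.memory=8Gi,pods=40
```
{: codeblock}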
In the **Search** input box of the LogDNA UI, use the `namespace:` field to filter logs based on the namespace.
{: tip}
When it comes to deploying to the different environments, your continuous integration / continuous delivery pipeline can be set up to drive the full process:
- continuously update the `Development` environment with the latest and greatest code from the `development` branch, running unit tests and integration tests on the dedicated cluster;
- promote development builds to the `Testing` environment, either automatically if all tests from the previous stages pass, or through a manual promotion process. Some teams also use different branches here, merging the working development state to a `stable` branch as an example;
- repeat a similar process to move to the `Production` environment.
When configuring the DevOps pipeline, make sure to use the API key of the functional user. Only the functional user needs to have the rights required to deploy apps to your clusters.
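Inside a pipeline job, this typically translates to logging in with that API key before deploying. The sketch below assumes the key is exposed as a secured `API_KEY` property and uses a hypothetical cluster name; the `cluster config` syntax varies with the CLI plugin version.

```sh
# Authenticate as the functional user and point kubectl at the target cluster
ibmcloud login --apikey "$API_KEY" -r us-south
ibmcloud ks cluster config --cluster my-cluster-dev
kubectl apply -f deployment.yaml
```
{: codeblock}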
During the build phase, it is important to properly version the Docker images. You can use the Git commit revision as part of the image tag, or a unique identifier provided by your DevOps toolchain; any identifier that will make it easy for you to map the image to the actual build and source code contained in the image.
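For instance, a build script can derive the tag from the short Git revision before pushing the image; the registry namespace and image name below are placeholders.

```sh
# Tag the image with the short Git commit so every deployment is traceable to a build
TAG=$(git rev-parse --short HEAD)
docker build -t us.icr.io/my-namespace/my-service:${TAG} .
docker push us.icr.io/my-namespace/my-service:${TAG}
```
{: codeblock}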
As you get acquainted with Kubernetes, Helm, the package manager for Kubernetes, will become a handy tool to version, assemble and deploy your application. This sample DevOps toolchain is a good starting point and is preconfigured for continuous delivery to a Kubernetes cluster. As your project grows into multiple microservices, the Helm umbrella chart will provide a good solution to compose your application.
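Once a chart is in place, the deployment step of the pipeline can boil down to a single idempotent command; the release name, chart path, and value names below are hypothetical.

```sh
# Install the release if absent, upgrade it otherwise, pinning the image tag built earlier
helm upgrade --install my-service ./chart/my-service \
  --namespace development \
  --set image.tag="$(git rev-parse --short HEAD)"
```
{: codeblock}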
Congratulations, your application can now safely be deployed from dev to production. Below are additional suggestions to improve application delivery.
- Add {{site.data.keyword.DRA_short}} to your pipeline to perform quality control during deployments.
- Review team member coding contributions and the interactions between developers with {{site.data.keyword.DRA_short}}.
- Follow the tutorial Plan, create and update deployment environments to automate the deployment of your environments.
- Getting Started with {{site.data.keyword.iamshort}}
- Best practices for organizing resources in a resource group
- Analyze logs and monitor health with LogDNA and Sysdig
- Continuous Deployment to Kubernetes
- Hello Helm toolchain
- Develop a microservices application with Kubernetes and Helm
- Grant permissions to a user to view logs in LogDNA
- Grant permissions to a user to view metrics in Sysdig