This document describes the release process to follow when generating a new release of the IBM Security Verify Access operator.
The version number should be of the format `v<year>.<month>.0`, for example: `v23.3.0`.
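The tag format above can be sketched as a simple string construction (a minimal illustration; the two-digit year and month values below are the example from the text, not computed from the current date):

```shell
# Compose a release tag of the form v<year>.<month>.0
# using the example values from this document.
year=23
month=3
tag="v${year}.${month}.0"
echo "${tag}"
```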
To generate a new version of the operator, create a new GitHub release: https://github.com/IBM-Security/verify-access-operator/releases/new.
The fields for the release should be:
| Field | Description |
|---|---|
| Tag | The version number, e.g. `v23.3.0` |
| Release title | The version number, e.g. `v23.3.0` |
| Release description | The resources associated with the `<version-number>` IBM Security Verify Access operator release. |
After the release has been created, the GitHub Actions workflow (https://github.com/IBM-Security/verify-access-operator/actions/workflows/build.yml) will be executed to generate the build. This build process will include:
- publishing the generated Docker images to Docker Hub;
- adding the manifest zip and bundle.yaml files to the release artifacts in GitHub.
Once a new GitHub release has been generated the updated operator bundle needs to be published to OperatorHub.io. Information on how to do this can be found at the following URL: https://k8s-operatorhub.github.io/community-operators/.
At a high level you need to (taken from: https://k8s-operatorhub.github.io/community-operators/contributing-via-pr/):
- Test the operator locally.
- Fork the GitHub project.
- Add the operator bundle to the verify-access-operator directory.
- Push a 'signed' commit of the changes (see https://k8s-operatorhub.github.io/community-operators/contributing-prerequisites/). The easiest way to sign the commit is to use the `git commit -s -m '<description>'` command to commit the changes.
- Contribute the changes back to the main GitHub repository (using the 'Contribute' button in the GitHub console). This will have the effect of creating a new pull request against the main GitHub repository.
- Monitor the 'checks' against the pull request to ensure that all of the automated test cases pass.
- Wait for the pull request to be merged. This will usually happen overnight.
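The signed-commit step above can be demonstrated locally. This is a throwaway sketch (the repository, file name, user name, and email are hypothetical) showing that `git commit -s` appends a `Signed-off-by:` trailer to the commit message:

```shell
# Create a disposable repository to demonstrate 'git commit -s'.
tmp=$(mktemp -d)
cd "${tmp}"
git init -q .
git config user.name "Example User"
git config user.email "user@example.com"

# Stage a placeholder file and make a signed-off commit.
echo "bundle" > bundle.yaml
git add bundle.yaml
git commit -q -s -m 'Add verify-access-operator bundle'

# Print the full commit message; it ends with the sign-off trailer.
git log -1 --format=%B
```

The trailer (`Signed-off-by: Example User <user@example.com>`) is what the community-operators checks look for on each commit.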
Certification projects are managed through the Red Hat Partner Connect Portal.
At a high level, to certify the operator, you need to:
- Create a 'certification project' for the operator using the Red Hat Partner Connect Portal (instructions);
- Provide the details of the operator on the 'Settings' tab;
- Test the operator and submit a pull request.
It is important that in the pull request the images contained within the cluster service version file are updated, replacing the tag name with the corresponding sha256 digest. You will also need to add a `spec.relatedImages` entry to the file which contains all of the images used by the operator (just copy a cluster service version file from a previous version of the operator and update the sha256 digest for each image).
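A `spec.relatedImages` entry in a cluster service version file has roughly the following shape. This is a hypothetical fragment: the image names, registry paths, and sha256 digests below are placeholders, not the operator's real values.

```yaml
# Placeholder fragment of a ClusterServiceVersion (CSV) file.
# Replace each image reference and digest with the real values
# from the release being certified.
spec:
  relatedImages:
    - name: verify-access-operator
      image: example.registry.io/verify-access-operator@sha256:0000000000000000000000000000000000000000000000000000000000000000
    - name: verify-access-runtime
      image: example.registry.io/verify-access-runtime@sha256:1111111111111111111111111111111111111111111111111111111111111111
```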
As a part of the certification process you need to test your bundle. You can do this locally, or by using the hosted pipeline. Neither mechanism is without problems.
Instructions on how to run the tests using the hosted pipeline are available at: https://github.com/redhat-openshift-ecosystem/certification-releases/blob/main/4.9/ga/hosted-pipeline.md.
At a high level you need to:
- Fork a copy of the GitHub repo: https://github.com/redhat-openshift-ecosystem/certified-operators;
- Add the bundle descriptor for the latest release to the `operators/ibm-security-verify-access-operator` directory;
- Commit the changes. The message in the commit should be: 'operator ibm-security-verify-access-operator (vv.vv.vv)'.
- Push the changes, and then create a new pull request against the main GitHub repo.
- Wait for the pipeline testing to complete, and debug any failures.
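The required commit message in the steps above can be composed from the version number like so (a minimal sketch; `v23.3.0` is the example version used earlier in this document):

```shell
# Build the commit message expected by the certification pipeline.
version="v23.3.0"
msg="operator ibm-security-verify-access-operator (${version})"
echo "${msg}"
```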
Instructions on how to run the tests locally are available at: https://github.com/redhat-openshift-ecosystem/certification-releases/blob/main/4.9/ga/ci-pipeline.md.
I was never able to successfully run the tests in my local OpenShift environment, although after a lot of trial and error I was able to make some limited progress. Some points to note about running the tests locally:
- You need to create a default storage class (type: no-provisioner);
- You need to create a new persistent volume using the YAML included below;
- You need to modify the `templates/workspace-template.yaml` file to reference the new PV: `volumeName: pv0001`.
```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
spec:
  capacity:
    storage: 50Gi
  nfs:
    server: 10.22.82.15
    path: /data/certify
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: manual
  volumeMode: Filesystem
```
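For the default no-provisioner storage class mentioned in the first bullet above, a definition along the following lines should work. This is a sketch, assuming the class is named `manual` to match the PV's `storageClassName`; the default-class annotation and `kubernetes.io/no-provisioner` provisioner are standard Kubernetes values.

```yaml
# Sketch of a no-provisioner StorageClass marked as the cluster
# default; the name matches the storageClassName of the PV above.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```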