Add e2e tests for GCP #4017

Merged (2 commits, Jul 14, 2023)
tests/integration/.env.sample (19 additions, 7 deletions)
@@ -1,17 +1,29 @@
## Azure
# export TF_VAR_azuredevops_org=
# export TF_VAR_azuredevops_org=
# export TF_VAR_azuredevops_pat=
# export TF_VAR_azure_location=
## Set the following only when authenticating using Service Principal (suited
## for CI environment).
# export ARM_CLIENT_ID=
# export ARM_CLIENT_SECRET=
# export ARM_SUBSCRIPTION_ID=
# export ARM_TENANT_ID=

## GCP
# export TF_VAR_gcp_project_id=
# export TF_VAR_gcp_zone=
# export TF_VAR_gcp_region=
# export TF_VAR_gcp_keyring=
# export TF_VAR_gcp_crypto_key=
## Email address of a GCP user used for git repository cloning over ssh.
# export TF_VAR_gcp_email=
## Set the following only when using service account.
## Provide absolute path to the service account JSON key file.
# export GOOGLE_APPLICATION_CREDENTIALS=

## Common variables
# export TF_VAR_tags='{"environment"="dev", "createdat"='"\"$(date -u +x%Y-%m-%d_%Hh%Mm%Ss)\""'}'
darkowlzz marked this conversation as resolved.
Show resolved Hide resolved
## These are not terraform variables
## but they are needed for the bootstrap tests
# export GITREPO_SSH_PATH=
# export GITREPO_SSH_PUB_PATH=
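For local runs, a filled-in copy of this sample is typically sourced into the shell before the tests are started. A minimal sketch, using a throwaway file and placeholder values (the real workflow copies `.env.sample` to `.env` and fills in actual values):

```shell
# Create a throwaway env file purely for illustration; in practice you
# would copy .env.sample to .env and fill in real values.
cat > /tmp/flux-e2e.env <<'EOF'
export TF_VAR_gcp_project_id=my-project
export TF_VAR_gcp_region=us-central1
EOF

# Source it so that terraform and `go test` inherit the variables.
. /tmp/flux-e2e.env
echo "project=${TF_VAR_gcp_project_id}"   # prints: project=my-project
```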
tests/integration/README.md (73 additions, 6 deletions)
@@ -1,7 +1,7 @@
# E2E Tests

The goal is to verify that Flux integration with cloud providers is actually working now and in the future.
Currently, we only have tests for Azure and GCP.

## General requirements

@@ -55,14 +55,80 @@ the tests:
- `Microsoft.KeyVault/*`
- `Microsoft.EventHub/*`

## GCP

### Architecture

The [gcp](./terraform/gcp) terraform files create the GKE cluster and the related resources needed to run the tests:
- A Google Container Registry and Artifact Registry
- A Google Kubernetes Cluster
- Two Google Cloud Source Repositories
- A Google Pub/Sub topic and a subscription that are used in the tests

Note: It doesn't create Google KMS keyrings and crypto keys, because these cannot be destroyed. Instead, you have
to pass in an existing keyring and crypto key that will be used to test the SOPS encryption in Flux. Please see `.env.sample`
for the corresponding terraform variables.
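For reference, an existing keyring and crypto key can be created once with the gcloud CLI; the names and location below are placeholders, not values expected by the tests:

```console
$ gcloud kms keyrings create test-keyring --location global
$ gcloud kms keys create sops-key --keyring test-keyring \
    --location global --purpose encryption
```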

### Requirements

- A GCP account with an active project that can create GKE clusters and registries, and with permission to assign roles.
- Existing GCP KMS keyring and crypto key.
- [Create a Keyring](https://cloud.google.com/kms/docs/create-key-ring)
- [Create a Crypto Key](https://cloud.google.com/kms/docs/create-key)
- The gcloud CLI; you need to be logged in using `gcloud auth login` as a User (not a
  Service Account), configure application default credentials with `gcloud auth
  application-default login`, and configure the docker credential helper with `gcloud auth configure-docker`.

**NOTE:** To use a Service Account (for example in a CI environment), set the
`GOOGLE_APPLICATION_CREDENTIALS` variable in `.env` to the path of the JSON
key file, source it, and authenticate the gcloud CLI with:
```console
$ gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS
```
Depending on the Container/Artifact Registry host used in the test, authenticate
docker accordingly:
```console
$ gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://us-central1-docker.pkg.dev
```
In this case, the GCP client in terraform uses the Service Account to
authenticate and the gcloud CLI is used only to authenticate with Google
Container Registry and Google Artifact Registry.

**NOTE FOR CI USAGE:** When saving the JSON key file as a CI secret, compress
the file content with
```console
$ cat key.json | jq -r tostring
```
to prevent aggressive masking in the logs. Refer to
[aggressive replacement in logs](https://github.com/google-github-actions/auth/blob/v1.1.0/docs/TROUBLESHOOTING.md#aggressive--replacement-in-logs)
for more details.
- Register [SSH Keys with Google Cloud](https://cloud.google.com/source-repositories/docs/authentication#ssh)
- Google Cloud supports these three SSH key types: RSA (only for keys with more than 2048 bits), ECDSA and ED25519
- **Note:** Google doesn't allow an SSH key to be associated with a service account email address. Therefore, there has to be an actual
user that the SSH keys are registered to, and the email of this user will be passed to terraform through the `TF_VAR_gcp_email`
variable.
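As an illustration, an ED25519 key pair for such a user could be generated as below (the path and email are placeholders); the public key is then registered with the user's Google account, and the key paths are what `GITREPO_SSH_PATH` and `GITREPO_SSH_PUB_PATH` in `.env` point to:

```shell
# Generate an ED25519 key pair with an empty passphrase (illustrative
# path and comment).
ssh-keygen -t ed25519 -N '' -C "dev@example.com" -f /tmp/gcp_e2e_id -q

# Two files are produced: the private key, and the .pub public key that
# gets registered with Google Cloud.
ls /tmp/gcp_e2e_id /tmp/gcp_e2e_id.pub
```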

### Permissions

The following roles are needed for provisioning the infrastructure and running the tests:

- Compute Instance Admin (v1)
- Kubernetes Engine Admin
- Service Account User
- Artifact Registry Administrator
- Artifact Registry Repository Administrator
- Cloud KMS Admin
- Cloud KMS CryptoKey Encrypter
- Source Repository Administrator
- Pub/Sub Admin
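These can be granted with `gcloud projects add-iam-policy-binding`; for example, for the Kubernetes Engine Admin role (the project ID and member below are placeholders):

```console
$ gcloud projects add-iam-policy-binding my-project \
    --member=user:dev@example.com --role=roles/container.admin
```

To our knowledge, the standard role IDs corresponding to the names above are `roles/compute.instanceAdmin.v1`, `roles/container.admin`, `roles/iam.serviceAccountUser`, `roles/artifactregistry.admin`, `roles/artifactregistry.repoAdmin`, `roles/cloudkms.admin`, `roles/cloudkms.cryptoKeyEncrypter`, `roles/source.admin` and `roles/pubsub.admin`.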

## Tests

Each test run is initiated by running `terraform apply` in the provider's terraform directory. It does this by using the
[tftestenv package](https://github.com/fluxcd/test-infra/blob/main/tftestenv/testenv.go) within the `fluxcd/test-infra`
repository. It then reads the Terraform output to get the information needed
for the tests, like the kubernetes client ID, the cloud repository urls, the key vault ID, etc. This means that
a lot of the communication with the cloud provider API is offloaded to Terraform instead of requiring it to be implemented in the test.
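The mechanism can be sketched with plain `terraform output -json`, which wraps each output in an object with a `value` field (the output name below is hypothetical; the real tests parse this through tftestenv rather than jq):

```shell
# Simulated `terraform output -json` result with one hypothetical output.
cat > /tmp/tf-output.json <<'EOF'
{"gcp_kubeconfig": {"sensitive": true, "type": "string", "value": "apiVersion: v1"}}
EOF

# Extract a single value the way a consumer of the Terraform output would.
jq -r '.gcp_kubeconfig.value' /tmp/tf-output.json   # prints: apiVersion: v1
```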

The following tests are currently implemented:

@@ -72,11 +138,11 @@
- kustomize-controller can decrypt secrets using SOPS and provider key vault
- image-automation-controller can create branches and push to cloud repositories (https+ssh)
- source-controller can pull charts from cloud provider container registry Helm repositories
- notification-controller can forward events to the cloud events service (Event Hub for Azure and Pub/Sub for Google)

The following test is run only for Azure since it is supported in the notification-controller:

- notification-controller can send commit status to Azure DevOps

### Running tests locally

@@ -119,8 +185,9 @@ ok github.com/fluxcd/flux2/tests/integration 947.341s

In the above, the test created a build directory `build/` and the flux CLI binary was copied to `build/flux`. It is used
to bootstrap Flux on the cluster. You can configure the location of the Flux CLI binary by setting the `FLUX_BINARY` variable.
We also pull two versions of the `ghcr.io/stefanprodan/podinfo` image. These images are pushed to the cloud provider's
container registry and used to test `ImageRepository` and `ImageUpdateAutomation`. The terraform resources get created
and the tests are run.
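The mirroring step amounts to something like the following, with a placeholder registry host and illustrative tags:

```console
$ docker pull ghcr.io/stefanprodan/podinfo:6.0.0
$ docker tag ghcr.io/stefanprodan/podinfo:6.0.0 \
    us-central1-docker.pkg.dev/my-project/flux-test/podinfo:6.0.0
$ docker push us-central1-docker.pkg.dev/my-project/flux-test/podinfo:6.0.0
```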

**IMPORTANT:** In case the terraform infrastructure ends up in a bad state, for example due to a crash during the apply,
the whole infrastructure can be destroyed by running `terraform destroy` in the `terraform/<provider>` directory.
tests/integration/azure_specific_test.go (0 additions, 173 deletions)

@@ -21,201 +21,28 @@ package integration

import (
"context"
"encoding/json"
"fmt"
"io"
"log"
"strings"
"testing"
"time"

eventhub "github.com/Azure/azure-event-hubs-go/v3"
"github.com/microsoft/azure-devops-go-api/azuredevops"
"github.com/microsoft/azure-devops-go-api/azuredevops/git"
. "github.com/onsi/gomega"
giturls "github.com/whilp/git-urls"

corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"

kustomizev1 "github.com/fluxcd/kustomize-controller/api/v1"
notiv1 "github.com/fluxcd/notification-controller/api/v1"
notiv1beta2 "github.com/fluxcd/notification-controller/api/v1beta2"
events "github.com/fluxcd/pkg/apis/event/v1beta1"
"github.com/fluxcd/pkg/apis/meta"
sourcev1 "github.com/fluxcd/source-controller/api/v1"
)

func TestEventHubNotification(t *testing.T) {
g := NewWithT(t)

ctx := context.TODO()
branchName := "test-notification"
testID := branchName + "-" + randStringRunes(5)

// Start listening to eventhub with latest offset
// TODO(somtochiama): Make here provider agnostic
hub, err := eventhub.NewHubFromConnectionString(cfg.notificationURL)
g.Expect(err).ToNot(HaveOccurred())
c := make(chan string, 10)
handler := func(ctx context.Context, event *eventhub.Event) error {
c <- string(event.Data)
return nil
}
runtimeInfo, err := hub.GetRuntimeInformation(ctx)
g.Expect(err).ToNot(HaveOccurred())
g.Expect(len(runtimeInfo.PartitionIDs)).To(Equal(1))
listenerHandler, err := hub.Receive(ctx, runtimeInfo.PartitionIDs[0], handler, eventhub.ReceiveWithLatestOffset())
g.Expect(err).ToNot(HaveOccurred())

// Setup Flux resources
manifest := `apiVersion: v1
kind: ConfigMap
metadata:
  name: foobar`
repoUrl := getTransportURL(cfg.applicationRepository)
client, err := getRepository(ctx, t.TempDir(), repoUrl, defaultBranch, cfg.defaultAuthOpts)
g.Expect(err).ToNot(HaveOccurred())
files := make(map[string]io.Reader)
files["configmap.yaml"] = strings.NewReader(manifest)
err = commitAndPushAll(ctx, client, files, branchName)
g.Expect(err).ToNot(HaveOccurred())

namespace := corev1.Namespace{
ObjectMeta: metav1.ObjectMeta{
Name: testID,
},
}
g.Expect(testEnv.Create(ctx, &namespace)).To(Succeed())
defer testEnv.Delete(ctx, &namespace)

secret := corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: testID,
Namespace: testID,
},
StringData: map[string]string{
"address": cfg.notificationURL,
},
}
g.Expect(testEnv.Create(ctx, &secret)).To(Succeed())
defer testEnv.Delete(ctx, &secret)

provider := notiv1beta2.Provider{
ObjectMeta: metav1.ObjectMeta{
Name: testID,
Namespace: testID,
},
Spec: notiv1beta2.ProviderSpec{
Type: "azureeventhub",
Address: repoUrl,
SecretRef: &meta.LocalObjectReference{
Name: testID,
},
},
}
g.Expect(testEnv.Create(ctx, &provider)).To(Succeed())
defer testEnv.Delete(ctx, &provider)

alert := notiv1beta2.Alert{
ObjectMeta: metav1.ObjectMeta{
Name: testID,
Namespace: testID,
},
Spec: notiv1beta2.AlertSpec{
ProviderRef: meta.LocalObjectReference{
Name: provider.Name,
},
EventSources: []notiv1.CrossNamespaceObjectReference{
{
Kind: "Kustomization",
Name: testID,
Namespace: testID,
},
},
},
}
g.Expect(testEnv.Create(ctx, &alert)).ToNot(HaveOccurred())
defer testEnv.Delete(ctx, &alert)

g.Eventually(func() bool {
nn := types.NamespacedName{Name: alert.Name, Namespace: alert.Namespace}
alertObj := &notiv1beta2.Alert{}
err := testEnv.Get(ctx, nn, alertObj)
if err != nil {
return false
}
if err := checkReadyCondition(alertObj); err != nil {
t.Log(err)
return false
}

return true
}, testTimeout, testInterval).Should(BeTrue())

modifyKsSpec := func(spec *kustomizev1.KustomizationSpec) {
spec.Interval = metav1.Duration{Duration: 30 * time.Second}
spec.HealthChecks = []meta.NamespacedObjectKindReference{
{
APIVersion: "v1",
Kind: "ConfigMap",
Name: "foobar",
Namespace: testID,
},
}
}
g.Expect(setUpFluxConfig(ctx, testID, nsConfig{
repoURL: repoUrl,
ref: &sourcev1.GitRepositoryRef{
Branch: branchName,
},
path: "./",
modifyKsSpec: modifyKsSpec,
})).To(Succeed())
t.Cleanup(func() {
err := tearDownFluxConfig(ctx, testID)
if err != nil {
t.Logf("failed to delete resources in '%s' namespace: %s", testID, err)
}
})

g.Eventually(func() bool {
err := verifyGitAndKustomization(ctx, testEnv, testID, testID)
if err != nil {
t.Log(err)
return false
}
return true
}, testTimeout, testInterval).Should(BeTrue())

// Wait to read an event from the event hub
g.Eventually(func() bool {
select {
case eventJson := <-c:
event := &events.Event{}
err := json.Unmarshal([]byte(eventJson), event)
if err != nil {
t.Logf("the received event type does not match Flux format, error: %v", err)
return false
}

if event.InvolvedObject.Kind == kustomizev1.KustomizationKind &&
event.InvolvedObject.Name == testID && event.InvolvedObject.Namespace == testID {
return true
}

return false
default:
return false
}
}, testTimeout, 1*time.Second).Should(BeTrue())
err = listenerHandler.Close(ctx)
g.Expect(err).ToNot(HaveOccurred())
err = hub.Close(ctx)
g.Expect(err).ToNot(HaveOccurred())
}

func TestAzureDevOpsCommitStatus(t *testing.T) {
g := NewWithT(t)
