updates to support OIDC demo for VAULT
Signed-off-by: Mariusz Sabath <[email protected]>
mrsabath committed Sep 17, 2021
1 parent a230d48 commit f313c9d
Showing 5 changed files with 38 additions and 28 deletions.
11 changes: 5 additions & 6 deletions docs/spire-oidc-tutorial.md
@@ -9,13 +9,9 @@ In this example we will deploy Tornjak and SPIRE server on OpenShift in IBM Cloud
Follow the documentation to deploy [Tornjak on OpenShift](./spire-on-openshift.md#deploy-on-openshift), with one difference: enable the `--oidc` flag:

```console
# check if rootCA is present:
ls sample-keys/CA
rootCA.crt rootCA.key rootCA.srl

# install:
-utils/install-open-shift-tornjak.sh -c <CLUSTER_NAME> -t <TRUST_DOMAIN> -p <PROJECT_NAME> --oidc
+utils/install-open-shift-tornjak.sh -c $CLUSTER_NAME -t $TRUST_DOMAIN -p $PROJECT_NAME --oidc
```

for example:
@@ -65,6 +61,9 @@ This output confirms that the OIDC endpoint is accessible and responds with valid data.
Let's install the [SPIRE Agents](./spire-on-openshift.md#step-2-installing-spire-agents-on-openshift):

```
oc new-project spire --description="My TSI Spire Agent project on OpenShift"
kubectl get configmap spire-bundle -n tornjak -o yaml | sed "s/namespace: tornjak/namespace: spire/" | kubectl apply -n spire -f -
export SPIRE_SERVER=spire-server-tornjak.space-x-01-9d995c4a8c7c5f281ce13d5467ff-0000.us-south.containers.appdomain.cloud
utils/install-open-shift-spire.sh -c space-x.01 -s $SPIRE_SERVER -t openshift.space-x.com
```
35 changes: 22 additions & 13 deletions docs/spire-oidc-vault.md
@@ -28,6 +28,7 @@ vault login -no-print "${ROOT_TOKEN}"

## Configure a Vault instance:
We provide a script, [examples/spire/vault-oidc.sh](../examples/spire/vault-oidc.sh), that configures the Vault instance with the required demo configuration. Before running it, let's walk through what it does.
**All the commands listed here are already in the script, so don't run them manually!**

The first few commands enable the Secrets Engine and set up Vault OIDC federation with
our instance of SPIRE.
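For context, enabling a KV secrets engine typically looks like the sketch below; the mount path and KV version are assumptions for illustration, not taken from the script:

```
# enable a KV version-2 secrets engine at the path "secret/" (assumed mount point)
vault secrets enable -path=secret kv-v2
```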
Expand All @@ -43,7 +44,7 @@ vault auth enable jwt
Set up our OIDC Discovery URL, using the values created in [OIDC tutorial setup](./spire-oidc-tutorial.md)
and using default role **dev**:
```
-vault write auth/jwt/config oidc_discovery_url=$SPIRE_SERVER default_role=“dev”
+vault write auth/jwt/config oidc_discovery_url=$OIDC_URL default_role="dev"
```

Define a policy `my-dev-policy` that gives `read` access to `my-super-secret`:
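The policy body itself is collapsed in this view. A minimal sketch of such a policy, assuming the KV v2 engine is mounted at `secret/` (hence the `secret/data/...` path), might look like:

```
vault policy write my-dev-policy - <<EOF
path "secret/data/my-super-secret" {
  capabilities = ["read"]
}
EOF
```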
Expand All @@ -70,7 +71,7 @@ cat > role.json <<EOF
"bound_audiences": "vault",
"bound_claims_type": "glob",
"bound_claims": {
"sub":"spiffe://openshift.space-x.com/eu-*/*/*/elon-musk/mars-mission-main/*"
"sub":"spiffe://openshift.space-x.com/region/*/cluster_name/*/ns/*/sa/elon-musk/pod_name/mars-mission-*"
},
"token_ttl": "1h",
"token_policies": "my-dev-policy"
Expand All @@ -89,14 +90,20 @@ Please make sure the following env. variables are set:

or pass them as script parameters:

```console
examples/spire/vault-oidc.sh
# or
examples/spire/vault-oidc.sh <OIDC_URL> <ROOT_TOKEN> <VAULT_ADDR>
```
Here is our example:
```console
examples/spire/vault-oidc.sh https://oidc-tornjak.space-x01-9d995c4a8c7c5f281ce13d546a94-0000.us-east.containers.appdomain.cloud $ROOT_TOKEN $VAULT_ADDR
```


-Now, create a test secret value:
+Once the script successfully completes, create a test secret value:
```console
vault kv put secret/my-super-secret test=123
```
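To double-check that the secret is readable (a quick sanity check, not part of the script):

```console
vault kv get secret/my-super-secret
```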
Expand All @@ -105,20 +112,22 @@ vault kv put secret/my-super-secret test=123
To test this setup, we are going to use the
[examples/spire/mars-spaceX.yaml](examples/spire/mars-spaceX.yaml) deployment.

-Based on the following annotation:
+Make sure the pod label matches the label in the Workload Registrar configuration.

```yaml
-metadata:
-  annotations:
-    spire-workload-id: eu-de/space-x.01/default/elon-musk/mars-mission-main/c0d076b51c28dc937a70a469b4cc946fb465ab6c86d6ae89ae2cf8eac1f55d6b
+template:
+  metadata:
+    labels:
+      identity_template: "true"
+      app: mars-mission
```
-this container will get the following identity:
+this container will get an identity that might look like this:

-`eu-de/space-x.01/default/elon-musk/mars-mission-main/c0d076b51c28dc937a70a469b4cc946fb465ab6c86d6ae89ae2cf8eac1f55d6b`
+`spiffe://openshift.space-x.com/region/us-east/cluster_name/space-x.01/ns/default/sa/elon-musk/pod_name/mars-mission-7874fd667c-rchk5`
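You can sanity-check locally that an identity like this matches the role's `bound_claims` glob, using plain shell pattern matching as a stand-in for Vault's glob matcher (illustration only; Vault does its own matching server-side):

```shell
#!/bin/sh
# The identity from the example above and the glob from role.json.
id='spiffe://openshift.space-x.com/region/us-east/cluster_name/space-x.01/ns/default/sa/elon-musk/pod_name/mars-mission-7874fd667c-rchk5'
pattern='spiffe://openshift.space-x.com/region/*/cluster_name/*/ns/*/sa/elon-musk/pod_name/mars-mission-*'

# In `case`, an unquoted pattern is matched as a shell glob ('*' matches '/' too).
case "$id" in
  $pattern) echo "match" ;;
  *)        echo "no match" ;;
esac   # prints "match"
```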

-Let's create a container and get inside:
+Let's create a pod and get inside the container:

```console
kubectl -n default create -f examples/spire/mars-spaceX.yaml
```

@@ -148,7 +157,7 @@ The JWT token is the long string that follows the **token**:
```console
bin/spire-agent api fetch jwt -audience vault -socketPath /run/spire/sockets/agent.sock
-token(spiffe://openshift.space-x.com/eu-de/space-x.01/default/elon-musk/mars-mission-main/c0d076b51c28dc937a70a469b4cc946fb465ab6c86d6ae89ae2cf8eac1f55d6b):
+token(spiffe://openshift.space-x.com/region/us-east/cluster_name/space-x.01/ns/default/sa/elon-musk/pod_name/mars-mission-7874fd667c-rchk5):
eyJhbGciOiJSUzI1NiIs....cy46fb465a
```
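If you want to peek at the claims inside the fetched JWT, you can decode its middle segment locally (decoding only, no signature verification). The token below is a self-contained toy token built in the script itself; a real token from `spire-agent` decodes the same way:

```shell
#!/bin/sh
# A JWT is three base64url segments separated by dots: header.payload.signature.
claims='{"sub":"spiffe://example.org/workload","aud":"vault"}'
b64url() { base64 | tr '+/' '-_' | tr -d '=\n'; }
jwt="$(printf '%s' '{"alg":"none"}' | b64url).$(printf '%s' "$claims" | b64url)."

# Take segment 2 (the claims), restore padding and the standard base64 alphabet.
seg=$(printf '%s' "$jwt" | cut -d. -f2)
case $(( ${#seg} % 4 )) in 2) seg="$seg==" ;; 3) seg="$seg=" ;; esac
printf '%s' "$seg" | tr '_-' '/+' | base64 -d   # prints the claims JSON
echo
```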

Expand All @@ -161,7 +170,7 @@ Export also `eurole` as **ROLE** and actual **VAULT_ADDR**

```console
export ROLE=eurole
-export VAULT_ADDR=http://tsi-kube01-9d995c4a8c7c5f281ce13d5467ff6a94-0000.us-south.containers.appdomain.cloud
+export VAULT_ADDR=http://tsi-vault-tsi-vault.space-x01-9d995c4a8c7c5f281ce13d546a94-0000.us-east.containers.appdomain.cloud
```
Now let's try to login to Vault using the JWT token:
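The login command itself is below the fold in this view; with the `jwt` auth method configured as above, it would typically take this shape (the `JWT` value here is a placeholder for the token fetched by `spire-agent`):

```console
export JWT="eyJhbGciOiJSUzI1NiIs..."
vault write auth/jwt/login role=$ROLE jwt=$JWT
```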

2 changes: 1 addition & 1 deletion examples/spire/vault-oidc.sh
@@ -73,7 +73,7 @@ EOF
"bound_audiences": "vault",
"bound_claims_type": "glob",
"bound_claims": {
"sub":"spiffe://openshift.space-x.com/eu-*/*/*/elon-musk/mars-mission-main/*"
"sub":"spiffe://openshift.space-x.com/region/*/cluster_name/*/ns/*/sa/elon-musk/pod_name/mars-mission-*"
},
"token_ttl": "24h",
"token_policies": "my-dev-policy"
2 changes: 1 addition & 1 deletion utils/install-open-shift-spire.sh
@@ -112,7 +112,7 @@ installSpireAgent(){
oc get projects | grep "${PROJECT}"
if [ "$?" != "0" ]; then
echo "Project $PROJECT must be created first"
echo "oc new-project $PROJECT --description=\"My TSI Spire Agent project on OpenShift\" 2> /dev/null"
echo "oc new-project $PROJECT --description=\"My TSI Spire Agent project on OpenShift\" "
exit 1
fi

16 changes: 9 additions & 7 deletions utils/install-open-shift-tornjak.sh
@@ -23,7 +23,7 @@ Syntax: ${0} -c <CLUSTER_NAME> -t <TRUST_DOMAIN> -p <PROJECT_NAME> --oidc
Where:
-c <CLUSTER_NAME> - name of the OpenShift cluster (required)
-t <TRUST_DOMAIN> - the trust root of SPIFFE identity provider, default: spiretest.com (optional)
--p <PROJECT_NAME> - OpenShift project [namespace] to install the Server, default: spire-server (optional)
+-p <PROJECT_NAME> - OpenShift project [namespace] to install the Server, default: tornjak (optional)
--oidc - execute OIDC installation (optional)
--clean - performs removal of project (allows additional parameters i.e. -p|--project).
HELPMEHELPME
Expand All @@ -36,12 +36,13 @@ cleanup() {
oc delete ClusterRole spire-server-role 2>/dev/null
oc delete ClusterRoleBinding spire-server-binding 2>/dev/null

+oc delete statefulset.apps/spire-server 2>/dev/null
oc delete scc "$SPIRE_SCC" 2>/dev/null
oc delete sa "$SPIRE_SA" 2>/dev/null
-oc delete route spire-server 2>/dev/null
-oc delete route tornjak-http 2>/dev/null
-oc delete route tornjak-mtls 2>/dev/null
-oc delete route tornjak-tls 2>/dev/null
oc delete secret spire-secret tornjak-certs 2>/dev/null
+oc delete cm spire-bundle spire-server oidc-discovery-provider 2>/dev/null
+oc delete service spire-server spire-oidc tornjak-http tornjak-mtls tornjak-tls 2>/dev/null
+oc delete route spire-server tornjak-http tornjak-mtls tornjak-tls oidc 2>/dev/null
oc delete ingress spireingress 2>/dev/null
#oc delete group $GROUPNAME --ignore-not-found=true
#oc delete project "$PROJECT" 2>/dev/null
@@ -108,11 +109,12 @@ installSpireServer(){
oc get projects | grep $PROJECT
if [ "$?" != "0" ]; then
echo "Project $PROJECT must be created first"
echo "oc new-project $PROJECT --description=\"My TSI Spire SERVER project on OpenShift\" 2> /dev/null"
echo "oc new-project $PROJECT --description=\"My TSI Spire SERVER project on OpenShift\" "
exit 1
fi

-oc -n $PROJECT get statefulset spire-server
+# test if Tornjak already exists:
+oc -n $PROJECT get statefulset spire-server 2>/dev/null
if [ "$?" == "0" ]; then
# check if spire-server project exists:
echo "$PROJECT project already exists. "

0 comments on commit f313c9d

Please sign in to comment.