This repository implements a common inventory system with eventing.
- Go 1.23.1+
- Make
When running locally, the [.inventory-api.yaml](./.inventory-api.yaml) file is used. By default, this configuration does the following:

- Exposes the inventory API on `localhost`, using port `8000` for HTTP and port `9000` for gRPC.
- Sets the authentication mechanism to `allow-unauthenticated`, allowing users to be authenticated with their user-agent value.
- Sets the authorization mechanism to `allow-all`.
- Configures the eventing mechanism to write to stdout.
- Sets the database implementation to sqlite3 and the database file to `inventory.db`.
- Configures the log level to `INFO`.
NOTE: You can update the default settings file as required to test different scenarios. Refer to the command line help (`make run-help`) for information on the different parameters.
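For orientation, the defaults listed above correspond to a config file shaped roughly like this. This is a sketch only: the `authz.impl` and `eventing.eventer` keys appear elsewhere in this document, but the remaining key names are illustrative assumptions, not copied from the real `.inventory-api.yaml`.

```yaml
# Sketch of the defaults described above; most key names are
# illustrative assumptions, not the real .inventory-api.yaml.
server:
  http:
    address: localhost:8000   # HTTP port
  grpc:
    address: localhost:9000   # gRPC port
authn:
  impl: allow-unauthenticated # authenticate via user-agent value
authz:
  impl: allow-all
eventing:
  eventer: stdout
storage:
  database: sqlite3
  sqlite3:
    dsn: inventory.db
log:
  level: INFO
```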
- Clone the repository and navigate to the directory.
- Install the required dependencies:

  ```shell
  make init
  ```

- Build the project:

  ```shell
  make build
  ```

- Run the database migration:

  ```shell
  make migrate
  ```

- Start the development server:

  ```shell
  make run
  ```
Due to the various alternatives for running some images, we accept arguments to override certain tools.

Since there are official instructions on how to manage multiple Go installs, we accept a `GO` parameter when running make, e.g.:

```shell
GO=go1.23.1 make run
```

or

```shell
export GO=go1.23.1
make run
```
We will use `podman` if it is installed, and fall back to `docker` otherwise. You can also ensure a particular binary is used by providing the `DOCKER` parameter, e.g.:

```shell
DOCKER=docker make api
```

or

```shell
export DOCKER=docker
make api
```
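The override mechanism amounts to ordinary variable fallback. The snippet below is a sketch of that mechanism, not the actual Makefile contents:

```shell
# Sketch: if GO/DOCKER are not provided, fall back to defaults;
# the repo's check_docker_podman.sh similarly prefers podman over docker.
unset GO DOCKER                  # simulate a clean environment
GO="${GO:-go}"                   # default to `go` when unset
if command -v podman >/dev/null 2>&1; then
  DOCKER="${DOCKER:-podman}"
else
  DOCKER="${DOCKER:-docker}"
fi
echo "GO=${GO}"
```

Exporting the variable (`export GO=go1.23.1`) and prefixing the make invocation (`GO=go1.23.1 make run`) both populate the same variable, so either form works.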
See DEBUG for instructions on how to debug.
There is a Make target to run inventory-api (with Postgres) and relations-api (with SpiceDB). It uses compose to build the current inventory code and spin up containers with the required dependencies.

- This provides a PSK file with the token "1234".
- Default ports in this setup are `8081` for HTTP and `9091` for gRPC.
- Refer to `inventory-api-compose.yaml` for additional configuration.

To start, use:

```shell
make inventory-up
```

To stop, use:

```shell
make inventory-down
```
In order to use the Kafka configuration, you have to run Strimzi and ZooKeeper. You can do this by running:

```shell
make inventory-up-kafka
```

Start Kessel Inventory, configuring it to connect to Kafka:

```yaml
eventing:
  eventer: kafka
  kafka:
    bootstrap-servers: "localhost:9092"
    # Adapt as required
    # security-protocol: "SASL_PLAINTEXT"
    # sasl-mechanism: PLAIN
```

You can use our default config with Kafka by running:

```shell
INVENTORY_API_CONFIG="./kafka-inventory-api.yaml" make run
```

- Refer to `kafka-inventory-api.yaml` for additional configuration.

Once started, you can watch the messages using kcat (formerly known as kafkacat) or by exec-ing into the running container like this:

```shell
source ./scripts/check_docker_podman.sh
KAFKA_CONTAINER_NAME=$(${DOCKER} ps | grep inventory-api-kafka | awk '{print $1}')
${DOCKER} exec -i -t ${KAFKA_CONTAINER_NAME} /bin/bash
# Once in the container
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic kessel-inventory
```

Manually terminate Kessel Inventory and then run the following to stop Kafka:

```shell
make inventory-down-kafka
```
Similar to the above, but instead of running Kafka, this will configure inventory to use a Keycloak service for authentication.

- Sets up a Keycloak instance running at port 8084 with the myrealm config file.
- Sets up a default service account with clientId `test-svc`. Refer to get-token to learn how to fetch a token.
- Refer to `sso-inventory-api.yaml` for additional configuration.

To start, use:

```shell
make inventory-up-sso
```

Once it has started, you will need to fetch a token and use it when making calls to the service.

To get a token, use:

```shell
make get-token
```

You can then export an environment variable with that value and use it in calls such as:

```shell
curl -H "Authorization: bearer ${TOKEN}" # ...
```
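Putting it together, a sketch of a session (the token value below is a placeholder, and the `livez` URL assumes this setup reuses the compose defaults' HTTP port `8081` — adjust to your configuration):

```shell
# Placeholder token; in practice, paste the value printed by `make get-token`.
export TOKEN="eyJ...placeholder"

# Every authenticated call carries the same header:
AUTH="Authorization: bearer ${TOKEN}"
echo "${AUTH}"
# e.g.: curl -H "${AUTH}" http://localhost:8081/api/inventory/v1/livez
```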
To stop, use:

```shell
make inventory-down-sso
```
Instructions to deploy Kessel Inventory in an ephemeral cluster can be found in the Kessel docs.
Whenever there is a change in the proto files (under [/api/kessel](./api/kessel)), an update is required.

This command will generate code and an [openapi.yaml](./openapi.yaml) file from the proto files:

```shell
make api
```

If there are expected breaking changes, run the following instead:

```shell
make api_breaking
```
By default, the quay repository is `quay.io/cloudservices/kessel-inventory`. If you wish to use another for testing, set the `IMAGE` value first:

```shell
export IMAGE=your-quay-repo # if desired
make docker-build-push
```

This is an alternative to the above command for macOS users, but should work for any arch:

```shell
export QUAY_REPO_INVENTORY=your-quay-repo # required
podman login quay.io # required, this target assumes you are already logged in
make build-push-minimal
```
All these examples use the REST API and assume the default local configuration. Adjust the curl requests if you are running with a different configuration (port, authentication mechanism, etc.).
Kessel Inventory includes health check endpoints for readiness and liveness probes.

The readyz endpoint checks whether the service is ready to handle requests:

```shell
curl http://localhost:8000/api/inventory/v1/readyz
```

The livez endpoint checks whether the service is alive and functioning correctly:

```shell
curl http://localhost:8000/api/inventory/v1/livez
```
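These endpoints are convenient for scripting. For example, a small wait loop — a sketch in which `wait_ready` is a hypothetical helper (not part of this repo) and the URL assumes the default local port:

```shell
# Hypothetical helper: poll an endpoint until it returns HTTP 200,
# or give up after a number of attempts (default 10, one second apart).
wait_ready() {
  url="$1"; tries="${2:-10}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url" 2>/dev/null)
    if [ "$code" = "200" ]; then
      echo ready
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo timed-out
  return 1
}

# Usage (uncomment once the server is running):
# wait_ready http://localhost:8000/api/inventory/v1/readyz
```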
Resources can be added to, updated in, and deleted from our inventory. Right now we support the following resources:

- `rhel-host`
- `integration`
- `k8s-cluster`
- `k8s-policy`

To add a rhel-host to the inventory, use the following curl command:

```shell
curl -H "Content-Type: application/json" --data "@data/host.json" http://localhost:8000/api/inventory/v1beta1/resources/rhel-hosts
```

To update it:

```shell
curl -X PUT -H "Content-Type: application/json" --data "@data/host.json" http://localhost:8000/api/inventory/v1beta1/resources/rhel-hosts
```

And finally, to delete it. Note that we use a different file, as the only required information is the reporter data:

```shell
curl -X DELETE -H "Content-Type: application/json" --data "@data/host-reporter.json" http://localhost:8000/api/inventory/v1beta1/resources/rhel-hosts
```
To add a `k8s-policy_is-propagated-to_k8s-cluster` relationship, first let's add the related resources `k8s-policy` and `k8s-cluster`:

```shell
curl -H "Content-Type: application/json" --data "@data/k8s-cluster.json" http://localhost:8000/api/inventory/v1beta1/resources/k8s-clusters
curl -H "Content-Type: application/json" --data "@data/k8s-policy.json" http://localhost:8000/api/inventory/v1beta1/resources/k8s-policies
```

Then you can create the relationship:

```shell
curl -H "Content-Type: application/json" --data "@data/k8spolicy_ispropagatedto_k8scluster.json" http://localhost:8000/api/inventory/v1beta1/resource-relationships/k8s-policy_is-propagated-to_k8s-cluster
```

To update it:

```shell
curl -X PUT -H "Content-Type: application/json" --data "@data/k8spolicy_ispropagatedto_k8scluster.json" http://localhost:8000/api/inventory/v1beta1/resource-relationships/k8s-policy_is-propagated-to_k8s-cluster
```

And finally, to delete it. Notice that the data file is different this time; we only need the reporter data:

```shell
curl -X DELETE -H "Content-Type: application/json" --data "@data/relationship_reporter_data.json" http://localhost:8000/api/inventory/v1beta1/resource-relationships/k8s-policy_is-propagated-to_k8s-cluster
```
The default development config has this option disabled. Check the alternative ways of running this service for configurations that have Kessel relations enabled.

Supposing Kessel relations is running at `localhost:9000`, you can enable it by updating the config as follows:

```yaml
authz:
  impl: kessel
  kessel:
    insecure-client: true
    url: localhost:9000
    enable-oidc-auth: false
```

If you want to enable OIDC authentication with SSO, you can use this instead:

```yaml
authz:
  impl: kessel
  kessel:
    insecure-client: true
    url: localhost:9000
    enable-oidc-auth: true
    sa-client-id: "<service-id>"
    sa-client-secret: "<secret>"
    sso-token-endpoint: "http://localhost:8084/realms/redhat-external/protocol/openid-connect/token"
```
Tests can be run using:

```shell
make test
```
Follow the steps below to contribute:
- Fork the project
- Create a new branch for your feature
- Submit a pull request