CI: Improve staging of OCI images to GHCR, using GHA
amotl committed May 20, 2024
1 parent 779d5dc commit 17ab7a3
Showing 11 changed files with 228 additions and 53 deletions.
1 change: 1 addition & 0 deletions .dockerignore
@@ -1,5 +1,6 @@
*
!tsperf
!release
!setup.py
!MANIFEST.in
!pyproject.toml
146 changes: 146 additions & 0 deletions .github/workflows/oci.yml
@@ -0,0 +1,146 @@
# Stage OCI container images through GitHub Actions (GHA) to GitHub Container Registry (GHCR).
name: OCI

on:
  pull_request: ~
  push:
    tags:
      - '*.*.*'

  schedule:
    - cron: '45 00 * * *'  # every day at 00:45 am

  # Allow job to be triggered manually.
  workflow_dispatch:

# Cancel in-progress jobs when pushing to the same branch.
concurrency:
  cancel-in-progress: true
  group: ${{ github.workflow }}-${{ github.ref }}

# The name for the produced image at ghcr.io.
env:
  IMAGE_NAME: "${{ github.repository }}"

jobs:

  build-and-test:
    runs-on: ubuntu-latest

    steps:
      - name: Acquire sources
        uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"
          architecture: "x64"
          cache: "pip"
          cache-dependency-path: |
            pyproject.toml
            setup.py

      - name: Build wheel package
        run: |
          pip install build
          python -m build

      - name: Upload wheel package
        uses: actions/upload-artifact@v4
        with:
          name: ${{ runner.os }}-wheel-${{ github.sha }}
          path: dist/*.whl
          retention-days: 7

      - name: Run tests
        run: |
          if [[ -f release/oci/test.yml ]]; then
            export DOCKER_BUILDKIT=1
            export COMPOSE_DOCKER_CLI_BUILD=1
            docker-compose --file release/oci/test.yml build
            docker-compose --file release/oci/test.yml run sut
          fi

  build-and-publish:
    needs: build-and-test
    runs-on: ubuntu-latest
    if: ${{ ! (startsWith(github.actor, 'dependabot') || github.event.pull_request.head.repo.fork ) }}

    steps:
      - name: Acquire sources
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Define image name and tags
        id: meta
        uses: docker/metadata-action@v5
        with:
          # List of OCI images to use as base name for tags
          images: |
            ghcr.io/${{ env.IMAGE_NAME }}
          # Generate OCI image tags based on the following events/attributes
          tags: |
            type=schedule,pattern=nightly
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}

      - name: Inspect metadata
        run: |
          echo "Tags: ${{ steps.meta.outputs.tags }}"
          echo "Labels: ${{ steps.meta.outputs.labels }}"

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v3

      - name: Cache OCI layers
        uses: actions/cache@v4
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-

      - name: Inspect builder
        run: |
          echo "Name: ${{ steps.buildx.outputs.name }}"
          echo "Endpoint: ${{ steps.buildx.outputs.endpoint }}"
          echo "Status: ${{ steps.buildx.outputs.status }}"
          echo "Flags: ${{ steps.buildx.outputs.flags }}"
          echo "Platforms: ${{ steps.buildx.outputs.platforms }}"

      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ github.token }}

      - name: Build and push image
        uses: docker/build-push-action@v5
        with:
          context: .
          file: release/oci/Dockerfile
          platforms: linux/amd64  # linux/arm64,linux/arm/v7
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          push: true
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new

      - name: Move cache
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache

      - name: Display git status
        run: |
          set -x
          git describe --tags --always
          git status
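Once the workflow above has published images, they can be consumed from GHCR. A sketch of the consumer side, assuming the repository is `crate/tsperf` (so `ghcr.io/${{ env.IMAGE_NAME }}` resolves to `ghcr.io/crate/tsperf`) and that a scheduled build has already produced the `nightly` tag defined by the `type=schedule,pattern=nightly` rule:

```shell
# Pull the nightly image staged by the scheduled workflow run.
docker pull ghcr.io/crate/tsperf:nightly

# Smoke-test the image by querying the program version.
docker run --rm -it ghcr.io/crate/tsperf:nightly tsperf --version
```

Release builds triggered by `*.*.*` tags are additionally addressable via the semver tags, e.g. `ghcr.io/crate/tsperf:1.0` per the `{{major}}.{{minor}}` pattern.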
14 changes: 0 additions & 14 deletions .github/workflows/release.yml
@@ -43,17 +43,3 @@ jobs:
        with:
          user: __token__
          password: ${{ secrets.PYPI_TOKEN }}
-
-      - name: Login to OCI registry
-        if: steps.get_tag.outputs.tag != steps.get_tag.outputs.last_tag
-        uses: azure/docker-login@v1
-        with:
-          login-server: ${{ secrets.REGISTRY_LOGIN_SERVER }}
-          username: ${{ secrets.REGISTRY_USERNAME }}
-          password: ${{ secrets.REGISTRY_PASSWORD }}
-
-      - name: Build and push OCI image
-        if: steps.get_tag.outputs.tag != steps.get_tag.outputs.last_tag
-        run: |
-          docker build -t ${{ secrets.REGISTRY_LOGIN_SERVER }}/tsperf:${{ steps.get_tag.outputs.tag }} .
-          docker push ${{ secrets.REGISTRY_LOGIN_SERVER }}/tsperf:${{ steps.get_tag.outputs.tag }}
14 changes: 0 additions & 14 deletions Dockerfile

This file was deleted.

2 changes: 1 addition & 1 deletion docs/.gitignore
@@ -1 +1 @@
-/_build
+_build
51 changes: 31 additions & 20 deletions docs/data-generator.md
@@ -18,10 +18,8 @@ pip install tsperf
:::{rubric} OCI image
:::

-Another way to use the Data Generator is to build the OCI image `tsperf`.
+Another way to run the Data Generator is to use the OCI image `ghcr.io/crate/tsperf`.

+ Navigate to root directory of this repository.
+ Build docker image with `docker build -t tsperf -f Dockerfile .`.
+ Adapt one of the example docker-compose files in the [example folder].
+ Start (e.g. CrateDB example) with `docker-compose -f examples/basic-cratedb.yml up`.

@@ -457,8 +455,8 @@ The value of `TIMESTAMP_DELTA` defines the interval between timestamps of the ge
[Data Generator Schemas](#data-generator-schemas) for more information on schemas.
:Default: empty string

-When using a relative path with the docker image, be sure to check out the
-`Dockerfile` to make sure to use the correct path.
+When using a relative path with the OCI image, be sure to check out the
+`Dockerfile`, and build a custom OCI image, in order to use the correct path.

(setting-dg-batch-size)=
#### BATCH_SIZE
@@ -982,20 +980,24 @@ set the following environment variables:
+ ID_START: 1
+ ID_END: 100

-As we want to use CrateDB running on localhost we set the following environment variables:
+As we want to use CrateDB running on `localhost`, we set the following environment variables:
+ ADAPTER: "cratedb"
+ ADDRESS: "host.docker.internal:4200" (this is the host when trying to access localhost from inside a docker container)
+ ADDRESS: "host.docker.internal:4200"
+ USERNAME: "aValidUsername"
+ PASSWORD: "PasswordForTheValidUsername"

:::{note}
`host.docker.internal:4200` is the correct database address when trying to access
CrateDB running on `localhost`, from inside a Docker container.
:::

As we want to have a consistent insert every 5 seconds for one hour we set the
following environment variables:
+ INGEST_MODE: 0
+ INGEST_SIZE: 720 (an hour has 3600 seconds; divided by 5-second intervals, this yields 720 inserts)
+ TIMESTAMP_DELTA: 5
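The `INGEST_SIZE` arithmetic above can be sketched in shell, using the values from this example (one insert every 5 seconds for one hour):

```shell
# INGEST_SIZE = seconds per run interval / TIMESTAMP_DELTA.
TIMESTAMP_DELTA=5
SECONDS_PER_HOUR=3600
INGEST_SIZE=$(( SECONDS_PER_HOUR / TIMESTAMP_DELTA ))
echo "INGEST_SIZE=${INGEST_SIZE}"   # prints INGEST_SIZE=720
```

To run for a different duration, swap `SECONDS_PER_HOUR` for the desired total number of seconds.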


-And finally we want to signal using the appropriate schema:
+Finally, we want to signal using the appropriate schema:
+ SCHEMA: "tsperf.schema.factory.simple:machine.json"

The resulting yml file could look like this:
@@ -1026,13 +1028,17 @@ services:
To run this example, follow these steps:
+ navigate to root directory of this repository
+ build docker image with `docker build -t tsperf -f Dockerfile .`
+ start an instance of CrateDB on localhost with `docker run -p "4200:4200" crate`
+ Enter USERNAME and PASSWORD in the [simple factory compose file]
+ Start an instance of CrateDB on `localhost`.
```shell
docker run --rm -it --publish="4200:4200" crate
```
+ Enter USERNAME and PASSWORD in the [simple factory compose file].
+ If no user was created, you can just delete both environment variables.
CrateDB will use a default user.
+ start the docker-compose file with `docker-compose -f examples/factory-simple-machine.yml up`
+ Start TSPERF using Docker Compose.
```shell
docker-compose -f examples/factory-simple-machine.yml up
```

You can now navigate to localhost:4200 to look at CrateDB or to localhost:8000 to look at the raw data of the Data Generator.

@@ -1066,15 +1072,20 @@ this can obviously be adjusted to create a bigger dataset.**

To run this example, follow these steps:

+ navigate to root directory of this repository
+ build docker image with `docker build -t tsperf -f Dockerfile .`
+ start an instance of CrateDB on localhost with `docker run -p "4200:4200" crate`
+ Adjust USERNAME and PASSWORD within the docker-compose file
+ Start an instance of CrateDB on `localhost`.
```shell
docker run --rm -it --publish="4200:4200" crate
```
+ Adjust USERNAME and PASSWORD in the [complex factory compose file].
+ If no user was created, you can just ignore both environment variables.
CrateDB will use a default user.
+ start the docker-compose file with `docker-compose -f examples/factory-complex-scenario.yml up`
+ Start TSPERF using Docker Compose.
```shell
docker-compose -f examples/factory-complex-scenario.yml up
```

-You can now navigate to localhost:4200 to look at CrateDB or to localhost:8000 to look at the raw data of the Data Generator.
+You can now navigate to `http://localhost:4200/` to look at CrateDB,
+or to `http://localhost:8000/` to look at the raw data of the Data Generator.


## Glossary
12 changes: 8 additions & 4 deletions docs/usage.md
@@ -21,15 +21,19 @@ docker run -it --rm --publish=4200:4200 --publish=5432:5432 crate:4.5.1
# Feed data into CrateDB table.
# Adjust write parameters like `--partition=day --shards=6 --replicas=3`.
tsperf write --adapter=cratedb --schema=tsperf.schema.basic:environment.json
tsperf write --schema=tsperf.schema.basic:environment.json --adapter=cratedb --address=cratedb.example.org:4200

# Use Docker.
docker run -it --rm --network=host tsperf tsperf write --schema=tsperf.schema.basic:environment.json --adapter=cratedb
tsperf write --adapter=cratedb --schema=tsperf.schema.basic:environment.json --address=cratedb.example.org:4200

# Query data from CrateDB table.
tsperf read --adapter=cratedb --query="SELECT * FROM environment LIMIT 10;"
```

Alternatively, use Docker.
```shell
alias tsperf="docker run --rm -it --network=host ghcr.io/crate/tsperf tsperf"
tsperf write --adapter=cratedb --schema=tsperf.schema.basic:environment.json
tsperf read --adapter=cratedb --query="SELECT * FROM environment LIMIT 10;"
```

## CrateDB+PostgreSQL
```shell
# Run CrateDB workload via PostgreSQL protocol.
26 changes: 26 additions & 0 deletions release/oci/Dockerfile
@@ -0,0 +1,26 @@
FROM python:3.11

ENV DEBIAN_FRONTEND noninteractive
ENV TERM linux

# Install dependencies for pyodbc.
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
    curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list && \
    apt-get update && \
    ACCEPT_EULA=Y apt-get install --yes unixodbc-dev msodbcsql17

# Copy sources
COPY . /src

# Install package
RUN --mount=type=cache,id=pip,target=/root/.cache/pip \
    pip install --use-pep517 --prefer-binary '/src'

# Designate default program to invoke
CMD ["tsperf"]

# Purge /src and /tmp directories.
RUN rm -rf /src /tmp/*

# Copy selftest.sh to the image
COPY release/oci/selftest.sh /usr/local/bin
10 changes: 10 additions & 0 deletions release/oci/selftest.sh
@@ -0,0 +1,10 @@
#!/bin/bash

# Fail on error.
set -e

# Display all commands.
# set -x

echo "Invoking tsperf"
tsperf --version
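Since the `Dockerfile` copies `selftest.sh` into `/usr/local/bin`, the smoke test can also be exercised against a locally built image. A sketch, assuming it is run from the repository root; the image tag `tsperf-selftest` is an arbitrary local name:

```shell
# Build the image from the repository root, using the release Dockerfile.
docker build --tag tsperf-selftest --file release/oci/Dockerfile .

# Run the self test inside the freshly built image.
docker run --rm tsperf-selftest selftest.sh
```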
4 changes: 4 additions & 0 deletions release/oci/test.yml
@@ -0,0 +1,4 @@
sut:
  build: ../..
  dockerfile: release/oci/Dockerfile
  command: selftest.sh
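This compose file is what the workflow's "Run tests" step consumes in CI. The same check can be reproduced locally with the commands the workflow uses:

```shell
# Mirror of the CI "Run tests" step: build the `sut` service with BuildKit
# enabled, then run the self test it defines.
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
docker-compose --file release/oci/test.yml build
docker-compose --file release/oci/test.yml run sut
```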
1 change: 1 addition & 0 deletions tsperf/cli.py
Expand Up @@ -314,6 +314,7 @@


@cloup.group("tsperf", help=f"See documentation for further details: {TSPERF_README_URL}")
@click.version_option()
def main():
setup_logging()

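The added `@click.version_option()` decorator is what `selftest.sh` relies on: it wires a `--version` flag into the `tsperf` command group. With the package installed, it can be exercised like this (a sketch; the exact version string depends on the installed release, and the output format follows click's default of `<prog>, version <x.y.z>`):

```shell
# Query the program version via the newly added flag.
tsperf --version
```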
