Schema Registry and REST Proxy as opt-in folder #102

Merged
merged 37 commits on Feb 3, 2018
Changes from all commits
Commits
37 commits
3a12d2e
Adds schema registry
solsson Jul 25, 2017
c14d4cc
Avoids the schema-registry service name, because it interferes ...
solsson Jul 25, 2017
55f5b31
This is HTTP so there's no reason not to use port 80
solsson Jul 25, 2017
dfc65f1
Adds kafka-rest proxy as rest:80
solsson Jul 25, 2017
d340d41
Tests rest-proxy, but hangs on GET /consumers/.../records
solsson Jul 27, 2017
8b1ddad
Adds more tests, and retries
solsson Jul 27, 2017
4fc6880
Starts a test case for rest-proxy
solsson Jul 29, 2017
713743e
We should probably wait for CP 3.3.0 because ...
solsson Jul 29, 2017
f587622
Sample config from v3.3.0 source
solsson Jul 31, 2017
2d4401b
Remove file appender as docker implies stdout
solsson Jul 31, 2017
4791068
New image uses config files instead of env
solsson Aug 1, 2017
86183d8
Runtime conf is logged at start, which documents host.name in case we…
solsson Aug 1, 2017
eb1a102
Once again be explicit (=unsurprising) about log config path
solsson Aug 1, 2017
f29e6eb
Logs, unlike docs, revealed that kafka-rest needs ...
solsson Aug 1, 2017
ac5c75a
Test commands succeed now; asserts remain to be added.
solsson Aug 1, 2017
7b4cfb4
Merge pull request #54 from Yolean/addon-rest-new-build
solsson Aug 1, 2017
caf56c7
Looks like Schema Registry works with empty bootstrap servers
solsson Aug 1, 2017
ff69832
Demonstrates that rest+schemas work together
solsson Aug 1, 2017
edeca3d
Moves to an addon folder, kubectl apply -f addon-cp/
solsson Aug 7, 2017
4ee79a4
Stay online at update, even with only one replica
solsson Aug 7, 2017
2f946b2
Adds readiness probes
solsson Aug 7, 2017
c20adc0
Adds liveness probes identical to readiness
solsson Aug 7, 2017
3b6a11e
Confluent Platform built from the same Java image as kafka:0.11.0.1
solsson Oct 3, 2017
59a2afa
Merge remote-tracking branch 'origin/addon-rest' into 1.8-confluent-rest
solsson Nov 29, 2017
3571677
Adapts existing addon to 3.0+ folder structure
solsson Nov 29, 2017
ed29106
Upgrades to 4.0.0
solsson Nov 29, 2017
1833c98
Manifests updated for Kubernetes 1.8
solsson Nov 29, 2017
465a505
Updates test manifests for 3.0+
solsson Dec 1, 2017
4d5815c
Adds CORS headers for REST Proxy, and clean up
solsson Dec 5, 2017
fe57b06
Maybe the CORS headers aren't necessary with PROXY=true
solsson Dec 5, 2017
6f4a0be
Schema Registry and REST Proxy are generic names, but Avro centric
solsson Dec 17, 2017
9686f2e
Renames services too; other registries and rest services may come
solsson Dec 17, 2017
8def797
Fixes the rename
solsson Dec 17, 2017
ec5eb7f
Maybe this port number isn't used, but it shouldn't differ from liste…
solsson Dec 17, 2017
2499e6b
Displays broker connectivity messages in logs
solsson Dec 17, 2017
e549959
Shows "Marking the coordinator x dead"
solsson Dec 17, 2017
4d26f51
Sample log config hides warnings, we don't
solsson Dec 18, 2017
43 changes: 43 additions & 0 deletions avro-tools/avro-tools-config.yml
@@ -0,0 +1,43 @@
kind: ConfigMap
metadata:
  name: avro-tools-config
  namespace: kafka
apiVersion: v1
data:
  schema-registry.properties: |-
    port=80
    listeners=http://0.0.0.0:80
    kafkastore.bootstrap.servers=PLAINTEXT://bootstrap.kafka:9092
    kafkastore.topic=_schemas
    debug=false

    # https://github.com/Landoop/schema-registry-ui#prerequisites
    access.control.allow.methods=GET,POST,PUT,OPTIONS
    access.control.allow.origin=*

  kafka-rest.properties: |-
    #id=kafka-rest-test-server
    listeners=http://0.0.0.0:80
    bootstrap.servers=PLAINTEXT://bootstrap.kafka:9092
    schema.registry.url=http://avro-schemas.kafka:80

    # https://github.com/Landoop/kafka-topics-ui#common-issues
    access.control.allow.methods=GET,POST,PUT,DELETE,OPTIONS
    access.control.allow.origin=*

  log4j.properties: |-
    log4j.rootLogger=INFO, stdout

    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

    log4j.logger.kafka=WARN, stdout
    log4j.logger.org.apache.zookeeper=WARN, stdout
    log4j.logger.org.apache.kafka=WARN, stdout
    log4j.logger.org.I0Itec.zkclient=WARN, stdout
    log4j.additivity.kafka.server=false
    log4j.additivity.kafka.consumer.ZookeeperConsumerConnector=false

    log4j.logger.org.apache.kafka.clients.Metadata=DEBUG, stdout
    log4j.logger.org.apache.kafka.clients.consumer.internals.AbstractCoordinator=INFO, stdout
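Both Deployments below mount this single ConfigMap and pick out their own properties file, so config changes land in one place. A hedged sketch of applying the opt-in folder and reading back the effective config, assuming kubectl points at a cluster that already runs this repo's kafka namespace (and a client recent enough to take a deployment as the logs target):

    kubectl apply -f avro-tools/
    # Runtime config is logged at start (commit 86183d8), so the effective
    # bootstrap and listener settings show up in the pod log:
    kubectl -n kafka logs deployment/avro-schemas | grep kafkastore.bootstrap.servers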
10 changes: 10 additions & 0 deletions avro-tools/rest-service.yml
@@ -0,0 +1,10 @@
apiVersion: v1
kind: Service
metadata:
  name: avro-rest
  namespace: kafka
spec:
  ports:
  - port: 80
  selector:
    app: rest-proxy
46 changes: 46 additions & 0 deletions avro-tools/rest.yml
@@ -0,0 +1,46 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: avro-rest
  namespace: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-proxy
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: rest-proxy
    spec:
      containers:
      - name: cp
        image: solsson/kafka-cp@sha256:2797da107f477ede2e826c29b2589f99f22d9efa2ba6916b63e07c7045e15044
        env:
        - name: KAFKAREST_LOG4J_OPTS
          value: -Dlog4j.configuration=file:/etc/kafka-rest/log4j.properties
        command:
        - kafka-rest-start
        - /etc/kafka-rest/kafka-rest.properties
        readinessProbe:
          httpGet:
            path: /
            port: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config
          mountPath: /etc/kafka-rest
      volumes:
      - name: config
        configMap:
          name: avro-tools-config
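Together with the avro-rest Service above, this exposes the REST Proxy at http://avro-rest.kafka inside the cluster. A minimal sketch of producing a JSON record from any pod that has curl, assuming a topic named demo already exists (the topic name is illustrative; the test jobs further down create their own):

    curl -X POST \
      -H 'Content-Type: application/vnd.kafka.json.v2+json' \
      -H 'Accept: application/vnd.kafka.v2+json' \
      --data '{"records":[{"value":"hello from curl"}]}' \
      http://avro-rest.kafka/topics/demo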
10 changes: 10 additions & 0 deletions avro-tools/schemas-service.yml
@@ -0,0 +1,10 @@
apiVersion: v1
kind: Service
metadata:
  name: avro-schemas
  namespace: kafka
spec:
  ports:
  - port: 80
  selector:
    app: schema-registry
47 changes: 47 additions & 0 deletions avro-tools/schemas.yml
@@ -0,0 +1,47 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: avro-schemas
namespace: kafka
spec:
replicas: 1
selector:
matchLabels:
app: schema-registry
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
maxSurge: 1
template:
metadata:
labels:
app: schema-registry
spec:
containers:
- name: cp
image: solsson/kafka-cp@sha256:2797da107f477ede2e826c29b2589f99f22d9efa2ba6916b63e07c7045e15044
env:
- name: SCHEMA_REGISTRY_LOG4J_OPTS
value: -Dlog4j.configuration=file:/etc/schema-registry/log4j.properties
command:
- schema-registry-start
- /etc/schema-registry/schema-registry.properties
readinessProbe:
httpGet:
path: /
port: 80
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 60
ports:
- containerPort: 80
volumeMounts:
- name: config
mountPath: /etc/schema-registry
volumes:
- name: config
configMap:
name: avro-tools-config
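The liveness probe's initialDelaySeconds gives the registry time to bootstrap its kafkastore.topic (_schemas) before failed probes could trigger restarts. Once the avro-schemas Service resolves, registering and listing schemas works like in the rest-curl test below; a hedged example with an illustrative subject name demo-value:

    curl -X POST -H 'Content-Type: application/vnd.schemaregistry.v1+json' \
      --data '{"schema": "{\"type\": \"string\"}"}' \
      http://avro-schemas.kafka/subjects/demo-value/versions
    curl http://avro-schemas.kafka/subjects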
43 changes: 43 additions & 0 deletions avro-tools/test/70rest-test1.yml
@@ -0,0 +1,43 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: rest-test1
  namespace: kafka
spec:
  backoffLimit: 1
  template:
    metadata:
      name: rest-test1
    spec:
      containers:
      - name: curl
        image: solsson/curl@sha256:523319afd39573746e8f5a7c98d4a6cd4b8cbec18b41eb30c8baa13ede120ce3
        env:
        - name: REST
          value: http://rest.kafka.svc.cluster.local
        - name: TOPIC
          value: test1
        command:
        - /bin/bash
        - -ce
        - >
          curl --retry 10 --retry-delay 30 --retry-connrefused -I $REST;

          curl -H 'Accept: application/vnd.kafka.v2+json' $REST/topics;

          curl --retry 10 -H 'Accept: application/vnd.kafka.v2+json' $REST/topics/test1;
          curl -X POST -H "Content-Type: application/vnd.kafka.json.v2+json" -H "Accept: application/vnd.kafka.v2+json" --data "{\"records\":[{\"value\":\"Test from $HOSTNAME at $(date)\"}]}" $REST/topics/$TOPIC -v;
          curl --retry 10 -H 'Accept: application/vnd.kafka.v2+json' $REST/topics/test2;

          curl -X POST -H "Content-Type: application/vnd.kafka.json.v2+json" -H "Accept: application/vnd.kafka.v2+json" --data '{"records":[{"value":{"foo":"bar"}}]}' $REST/topics/$TOPIC -v;

          curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --data '{"name": "my_consumer_instance", "format": "json", "auto.offset.reset": "earliest"}' $REST/consumers/my_json_consumer -v;

          curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --data "{\"topics\":[\"$TOPIC\"]}" $REST/consumers/my_json_consumer/instances/my_consumer_instance/subscription -v;

          curl -X GET -H "Accept: application/vnd.kafka.json.v2+json" $REST/consumers/my_json_consumer/instances/my_consumer_instance/records -v;

          curl -X DELETE -H "Content-Type: application/vnd.kafka.v2+json" $REST/consumers/my_json_consumer/instances/my_consumer_instance -v;

          sleep 300
      restartPolicy: Never
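This job retries until the proxy answers, exercises produce/consume over the v2 API, and then sleeps so the pod can still be inspected. A hedged way to run it and follow the output, assuming the default kubectl context:

    kubectl apply -f avro-tools/test/70rest-test1.yml
    kubectl -n kafka logs -f job/rest-test1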
186 changes: 186 additions & 0 deletions avro-tools/test/rest-curl.yml
@@ -0,0 +1,186 @@
---
kind: ConfigMap
metadata:
  name: rest-curl
  namespace: test-kafka
apiVersion: v1
data:

  setup.sh: |-
    touch /tmp/testlog

    # Keep starting up until rest proxy is up and running
    curl --retry 10 --retry-delay 30 --retry-connrefused -I -s $REST

    curl -s -H 'Accept: application/vnd.kafka.v2+json' $REST/brokers | egrep '."brokers":.0'

    curl -s -H 'Accept: application/vnd.kafka.v2+json' $REST/topics
    echo ""

    curl -s -H 'Accept: application/vnd.kafka.v2+json' $REST/topics/$TOPIC
    echo ""

    curl -X POST \
      -H "Content-Type: application/vnd.kafka.json.v2+json" -H "Accept: application/vnd.kafka.v2+json" \
      --data "{\"records\":[{\"value\":\"Test from $HOSTNAME at $(date -u -Iseconds)\"}]}" \
      $REST/topics/$TOPIC
    echo ""

    curl -s -H 'Accept: application/vnd.kafka.v2+json' $REST/topics/$TOPIC/partitions
    echo ""

    curl -X POST \
      -H "Content-Type: application/vnd.kafka.v2+json" \
      --data '{"name": "my_consumer_instance", "format": "json", "auto.offset.reset": "earliest"}' \
      $REST/consumers/my_json_consumer
    echo ""

    curl -X POST \
      -H "Content-Type: application/vnd.kafka.v2+json" \
      --data "{\"topics\":[\"$TOPIC\"]}" \
      $REST/consumers/my_json_consumer/instances/my_consumer_instance/subscription \
      -w "%{http_code}"
    echo ""

    curl -X GET \
      -H "Accept: application/vnd.kafka.json.v2+json" \
      $REST/consumers/my_json_consumer/instances/my_consumer_instance/records

    curl -X DELETE \
      -H "Content-Type: application/vnd.kafka.v2+json" \
      $REST/consumers/my_json_consumer/instances/my_consumer_instance

    # schema-registry

    curl -X GET $SCHEMAS/subjects
    echo ""

    curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
      --data '{"schema": "{\"type\": \"string\"}"}' \
      $SCHEMAS/subjects/$TOPIC-key/versions
    echo ""

    curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
      --data '{"schema": "{\"type\": \"string\"}"}' \
      $SCHEMAS/subjects/$TOPIC-value/versions
    echo ""

    curl -X GET $SCHEMAS/schemas/ids/1
    echo ""

    curl -X GET $SCHEMAS/subjects/$TOPIC-value/versions/1
    echo ""

    # rest + schema
    # TODO new topic needed because this breaks json consumer above

    curl -X POST -H "Content-Type: application/vnd.kafka.avro.v2+json" \
      -H "Accept: application/vnd.kafka.v2+json" \
      --data '{"value_schema": "{\"type\": \"record\", \"name\": \"User\", \"fields\": [{\"name\": \"name\", \"type\": \"string\"}]}", "records": [{"value": {"name": "testUser"}}]}' \
      $REST/topics/$TOPIC
    echo ""

    curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" \
      --data '{"name": "my_consumer_instance", "format": "avro", "auto.offset.reset": "earliest"}' \
      $REST/consumers/my_avro_consumer
    echo ""

    curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" \
      --data "{\"topics\":[\"$TOPIC\"]}" \
      $REST/consumers/my_avro_consumer/instances/my_consumer_instance/subscription

    curl -X GET -H "Accept: application/vnd.kafka.avro.v2+json" \
      $REST/consumers/my_avro_consumer/instances/my_consumer_instance/records

    tail -f /tmp/testlog

  continue.sh: |-
    exit 0

  run.sh: |-
    exec >> /tmp/testlog
    exec 2>&1

    exit 0

---
apiVersion: batch/v1
kind: Job
metadata:
  name: rest-curl
  namespace: test-kafka
spec:
  template:
    spec:
      containers:
      - name: topic-create
        image: solsson/kafka:1.0.0@sha256:17fdf1637426f45c93c65826670542e36b9f3394ede1cb61885c6a4befa8f72d
        command:
        - ./bin/kafka-topics.sh
        - --zookeeper
        - zookeeper.kafka.svc.cluster.local:2181
        - --create
        - --if-not-exists
        - --topic
        - test-rest-curl
        - --partitions
        - "1"
        - --replication-factor
        - "1"
      restartPolicy: Never
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: rest-curl
  namespace: test-kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      test-target: kafka-confluent-rest
      test-type: readiness
  template:
    metadata:
      labels:
        test-target: kafka-confluent-rest
        test-type: readiness
    spec:
      containers:
      - name: testcase
        image: solsson/curl@sha256:523319afd39573746e8f5a7c98d4a6cd4b8cbec18b41eb30c8baa13ede120ce3
        env:
        - name: SCHEMAS
          value: http://schemas.kafka.svc.cluster.local
        - name: REST
          value: http://rest.kafka.svc.cluster.local
        - name: TOPIC
          value: test-rest-curl
        # Test set up
        command:
        - /bin/bash
        - -e
        - /test/setup.sh
        # Test run, again and again
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -e
            - /test/run.sh
          # We haven't worked on timing
          periodSeconds: 60
        # Test quit on nonzero exit
        livenessProbe:
          exec:
            command:
            - /bin/bash
            - -e
            - /test/continue.sh
        volumeMounts:
        - name: config
          mountPath: /test
      volumes:
      - name: config
        configMap:
          name: rest-curl
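The test pattern is readiness-based: setup.sh ends with tail -f so the container keeps running, while run.sh and continue.sh back the readiness and liveness probes. A hedged sketch of applying it and watching the outcome, assuming the test-kafka namespace from this repo's test setup already exists:

    kubectl apply -f avro-tools/test/rest-curl.yml
    kubectl -n test-kafka get pods -l test-type=readiness -w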