
Commit

Improve docs (#6)
atemate authored Jul 18, 2023
1 parent 9f0ecae commit 84169f3
Showing 4 changed files with 35 additions and 24 deletions.
37 changes: 20 additions & 17 deletions README.md
@@ -1,8 +1,7 @@
# case-change-machine
Test task: REST API for money change machine.

See [src/chg-package/README.md](src/chg-package/README.md) for algorithm implementation details.

# REST API calculating coins for change
- See algorithm requirements and implementation details in [src/chg-package/README.md](src/chg-package/README.md).
- See REST API specification in [src/chg-service/README.md](src/chg-service/README.md)
![images/kibana-2.png](images/kibana-2.png)

## Development
- Requirements:
@@ -19,10 +18,14 @@ make install
make unit-tests
```

## Standalone server
### Run as standalone server
- Run in docker-compose:
```
make run-docker
docker-compose -f ./docker-compose-single.yaml up --build
[+] Building 1.9s (8/8) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B
...
^C
```
@@ -57,7 +60,7 @@ Cleanup:
make run-docker
```

## Deployment with log collection
### Deployment with log collection
As the last part of the task, we were asked to persist the transaction history so that we can verify that everything works properly:
- After the payment transaction is processed by the REST API, but before returning the result, create and dispatch a new event to notify about the transaction.
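A minimal sketch of this flow, where `compute_change` and `event_sink` are hypothetical stand-ins for the real service parts:

```python
from typing import Callable

def compute_change(change_cents: int) -> dict[int, int]:
    # Placeholder for the actual change-making algorithm (amounts in cents).
    return {100: change_cents // 100, 1: change_cents % 100}

def pay(eur_inserted_cents: int, price_cents: int,
        event_sink: Callable[[dict], None]) -> dict[int, int]:
    change = compute_change(eur_inserted_cents - price_cents)
    # Dispatch the event AFTER processing but BEFORE returning the result,
    # as the requirement above states.
    event_sink({"inserted": eur_inserted_cents,
                "price": price_cents,
                "change": change})
    return change
```

In the real service, `event_sink` would be whichever transport the sections below compare (a queue publish, a direct DB write, or a log line).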
@@ -70,7 +73,7 @@ Obviously, semi-structured json payloads must be stored in a no-SQL database opt
The main question is the transport for the messages.


### Solution 1: (async) Message Queues
#### Solution 1: (async) Message Queues
The first solution that comes to mind is to connect the service to a database via a message queue (RabbitMQ, SQS, Pub/Sub), a task queue (Celery), or a streaming service (Kafka, Kinesis).
The latter services can store messages for some period and offer SQL-like functionality to query historical data for analytics; however, this comes at higher processing cost.

@@ -87,12 +90,12 @@ Disadvantages:
- definitely overkill for plain logging (low-latency analytics is not needed)


### Solution 2: (sync) Separate custom REST API or DB directly
#### Solution 2: (sync) Separate custom REST API or DB directly
Alternatively, one could implement a separate custom REST API microservice that would receive a payload and write it to a DB.
In a simpler setup, one could even connect the main service directly to the database. Both cases would solve the logging issue; however, it would be tricky to scale such a solution if needed, and also to keep it robust and responsive.


### Solution 3: (async) Log aggregation tool
#### Solution 3: (async, chosen) Log aggregation tool
The most natural solution for logging the payloads and processing results is to process the server logs.
It could be done with a centralised logging service that runs lightweight sidecar containers next to each microservice's pod, collects the log entries (possibly with some time delay), and offers rich analytics functionality. Such solutions include Logstash or Filebeat (+ Elasticsearch + Kibana), or fluentd (+ for example DataDog).
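For this to work, the service only has to emit each processed transaction as one structured JSON line (NDJSON), which a shipper like Filebeat or fluentd can then collect. A sketch with illustrative field names:

```python
import json
import logging
import sys

# One JSON object per line on stdout: exactly the format log shippers
# such as Filebeat or fluentd expect to collect from a container.
logging.basicConfig(stream=sys.stdout, format="%(message)s", level=logging.INFO)
logger = logging.getLogger("transactions")

def log_transaction(payload: dict) -> str:
    """Serialize the payload compactly and emit it as a single log line."""
    line = json.dumps(payload, separators=(",", ":"), sort_keys=True)
    logger.info(line)
    return line
```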

@@ -106,7 +109,7 @@ The beauty of this solution is that:
- is scalable and robust,
- when needed, can still be connected to message queues or more advanced log processors (for example to persist logs to other places)

[Filebeat diagram from the [official docs](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-overview.html)](./images/filebeat.png)
![Filebeat diagram from the [official docs](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-overview.html)](./images/filebeat.png)

In our setup, the data is **persisted to docker volumes**, which can be replaced with K8s PersistentVolumes in a production setup.
To my knowledge, an ELK-based logging system does not provide persistence other than log files; however, as mentioned above, ELK can easily be connected to other databases via message queues.
@@ -143,24 +146,24 @@ $ make run-load-test
...
(press ^C to abort and kill all containers)
```
[images/locust-1.png](images/locust1.png)
![images/locust-1.png](images/locust-1.png)

- Open http://0.0.0.0:8089/ and click "Start swarming" to start the load test:
[images/locust-2.png](images/locust2.png)
![images/locust-2.png](images/locust-2.png)

- In another terminal, run `make setup-kibana` to load the pre-configured Kibana dashboard.

- Run `make open-kibana` to open Kibana in browser:
[images/kibana-1.png](images/kibana-1.png)
![images/kibana-1.png](images/kibana-1.png)

- Open the Dashboard where we display some real-time metrics (such as distributions of returned coins, total amounts of coins or EUR, etc.):
[images/kibana-2.png](images/kibana-2.png)
![images/kibana-2.png](images/kibana-2.png)

- Open the "Discover" section to access the individual payloads:
[images/kibana-3.png](images/kibana-3.png)
![images/kibana-3.png](images/kibana-3.png)

- Note also how an analyst can filter on the values:
[images/kibana-4.png](images/kibana-4.png)
![images/kibana-4.png](images/kibana-4.png)


- Press ^C in both terminals to stop docker-compose, then clean up:
Binary file added images/openapi.png
9 changes: 5 additions & 4 deletions src/chg-package/README.md
@@ -1,4 +1,4 @@
# Internals package for the Change Machine
# Implementation package for the Change Machine

## Requirements

@@ -26,7 +26,7 @@ If the service is to be scaled for many currywurst machines:
- scalable?


### Ideas for improvements
### Ideas for future improvements
- allow giving out banknotes as change
- stateful (has limited amount of coins and banknotes)
- forbids accepting several banknotes (e.g. €500)
@@ -39,9 +39,10 @@ Note that as the change-making problem is weakly NP-hard (depends on the currenc

## Algorithms
The task is the change-making problem, a special case of the integer knapsack problem, which is weakly NP-hard.
A simple greedy algorithm finds a not-always-optimal solution, but it is good enough to cover most typical cases.

### Greedy method
See implementation in `greedy_change()`.
See the implementation of `greedy_change()` in [src/chg-package/change_machine_package/algorithms.py](src/chg-package/change_machine_package/algorithms.py)

The idea is to keep selecting the largest denomination coins/notes available to represent a given amount of money, gradually reducing the amount until it reaches zero. The greedy method is already optimal for the Euro currency, but might give non-optimal solutions for other currencies.
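A minimal sketch of this idea (the actual `greedy_change()` signature in `algorithms.py` may differ; amounts are in cents to avoid float errors):

```python
def greedy_change(amount: int, denominations: list[int]) -> dict[int, int]:
    """Greedy change-making: always take the largest coin that still fits."""
    change: dict[int, int] = {}
    for coin in sorted(denominations, reverse=True):
        # Take as many of this denomination as possible, then move on.
        count, amount = divmod(amount, coin)
        if count:
            change[coin] = count
    if amount:
        raise ValueError("amount cannot be represented with these denominations")
    return change
```

For Euro coins this is optimal, e.g. 5.10 EUR of change becomes 2 × 2.00 + 1 × 1.00 + 1 × 0.10.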

Expand All @@ -51,7 +52,7 @@ However, it is important to note that the greedy strategy may not always provide


### Dynamic programming method
See implementation in `dynamic_programming_change()`.
See the implementation of `dynamic_programming_change()` in [src/chg-package/change_machine_package/algorithms.py](src/chg-package/change_machine_package/algorithms.py)

The algorithm uses dynamic programming to find the minimal number of coins needed to make change for a given amount:
- it constructs a matrix where each cell represents the minimal number of coins required to make change for a specific amount using available coin denominations.
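A hedged sketch of the approach (the real `dynamic_programming_change()` may differ; here a 1-D table over amounts plays the role of the matrix described above, with a second array recording the last coin used so the solution can be reconstructed):

```python
def dynamic_programming_change(amount: int, denominations: list[int]) -> dict[int, int]:
    """Minimal-coin change via dynamic programming over amounts 0..amount."""
    INF = float("inf")
    best = [0] + [INF] * amount   # best[a] = minimal coins needed for amount a
    last = [0] * (amount + 1)     # last[a] = last coin used to reach a
    for a in range(1, amount + 1):
        for coin in denominations:
            if coin <= a and best[a - coin] + 1 < best[a]:
                best[a] = best[a - coin] + 1
                last[a] = coin
    if best[amount] == INF:
        raise ValueError("amount cannot be represented with these denominations")
    # Walk back through the `last` array to collect the chosen coins.
    change: dict[int, int] = {}
    a = amount
    while a:
        coin = last[a]
        change[coin] = change.get(coin, 0) + 1
        a -= coin
    return change
```

For a non-canonical coin system such as {1, 3, 4}, amount 6 is solved with two coins (3 + 3), where greedy would use three (4 + 1 + 1).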
Expand Down
13 changes: 10 additions & 3 deletions src/chg-service/README.md
@@ -1,6 +1,6 @@
# REST API service for Change Machine
# REST API service for the Change Machine

### Usage:
### API specification
```
$ make run-local
poetry run uvicorn change_machine_service.api:app --reload
@@ -44,9 +44,16 @@ $ curl 'localhost:8000/api/v1/pay?eur_inserted=10&currywurst_price_eur=4.9' | jq
}
```

For a more complete API specification, please check http://127.0.0.1:8000/api/v1/docs
![../../images/openapi.png](../../images/openapi.png)

### Configuration
See [src/chg-service/change_machine_service/settings.py](src/chg-service/change_machine_service/settings.py), which can be overloaded using environment variables.
See [src/chg-service/change_machine_service/settings.py](src/chg-service/change_machine_service/settings.py), whose values can be overridden using environment variables (the server must be restarted to pick up changes).
For example:
- specify the change-computation algorithm: `export CHG_ALGORITHM="greedy_search"`
- specify log file path (defaults to none): `export SRV_LOG_FILE="/path/to/log.ndjson"`
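A minimal sketch of how such settings can be read (the real `settings.py` may use a different mechanism, for example pydantic's `BaseSettings`):

```python
import os

# Each setting falls back to a default unless overridden in the environment.
# Values are read at import time, hence a server restart is needed to apply changes.
CHG_ALGORITHM: str = os.environ.get("CHG_ALGORITHM", "greedy_search")
SRV_LOG_FILE: str | None = os.environ.get("SRV_LOG_FILE")  # None disables file logging
```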


### Load testing
Reports were not recorded, but when tested in local Docker, the server sustains ~300 rps at ~50 ms latency:
![../../images/locust-2.png](../../images/locust-2.png)
