
fix: fixing the call to train.py directly in python
else we have issues with the versions
bcm-at-zama committed Sep 27, 2024
1 parent bfcc6f0 commit a7fcee8
Showing 7 changed files with 247 additions and 12 deletions.
57 changes: 50 additions & 7 deletions use_case_examples/deployment/breast_cancer/README.md
@@ -8,14 +8,57 @@
To run this example on AWS you will also need to have the AWS CLI properly set up.
To do so please refer to [AWS documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html).
You can also run this example locally, either with Docker or by running the scripts directly.

#### On the developer machine:

1. To train your model you can either:
   - use `train_with_docker.sh` to train inside Docker (recommended), or
   - run `python train.py` directly, but only if you know what you are doing and can manage the synchronisation of package versions between client and server.

This will train a model and [serialize the FHE circuit](../../../docs/guides/client_server.md) in a new folder called `./dev`.
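
For reference, here is a minimal sketch of what the training step boils down to (the model class and the train/test split are assumptions for illustration; see `train.py` in this folder for the actual script):

```
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from concrete.ml.deployment import FHEModelDev
from concrete.ml.sklearn import LogisticRegression

# Load the data and keep a training split (assumed setup)
X, y = load_breast_cancer(return_X_y=True)
X_train, _, y_train, _ = train_test_split(X, y, test_size=0.1, random_state=0)

# Train, compile the model to an FHE circuit, and serialize everything to ./dev
model = LogisticRegression()
model.fit(X_train, y_train)
model.compile(X_train)
FHEModelDev("./dev", model).save(via_mlir=True)
```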

#### On the server machine:

1. Copy the `./dev` directory from the developer machine.
1. If you need to delete existing Docker containers: `docker rm -f $(docker ps -a -q)`
1. Launch the server via:

```
python ../server/deploy_to_docker.py --path-to-model ./dev
```

You will finally see something like

> INFO: Uvicorn running on http://0.0.0.0:5000 (Press CTRL+C to quit)

which means the server is ready to serve (on port 5000 inside the container; `deploy_to_docker.py` maps it to port 8888 on the host).
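
To quickly check that the server answers, one option (a hypothetical check, assuming the `requests` package is installed) is to hit the `/get_client` endpoint, which is the first endpoint the client scripts query:

```
import requests

# The host-side port is 8888 (mapped to the container's port 5000 by deploy_to_docker.py)
response = requests.get("http://localhost:8888/get_client")
print(response.status_code)  # expect 200
```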

#### On the client machine:

##### If you use Docker on the client side:

1. Launch `build_docker_client_image.py` to build a client Docker image.
1. Run the client with the `client.sh` script. This will run the container in interactive mode.
1. Then, inside this container, you can launch the client script to interact with the server:

```
URL="<my_url>" python client.py
```

where `<my_url>` is the content of the `url.txt` file written by `deploy_to_docker.py` (if you don't set `URL`, `client.py` falls back to its default, `http://localhost:8888`, which is the address to use when the server runs in Docker on localhost).
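
The relevant lines of `client.py` (shown in the diff below) resolve the address and then fetch the client artifacts, so a minimal interaction looks like:

```
import os
import requests

# Resolve the server address, as client.py does
URL = os.environ.get("URL", "http://localhost:8888")

# First call: download client.zip, which holds the crypto-parameters
response = requests.get(f"{URL}/get_client")
assert response.status_code == 200
```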

##### If you do the client side directly in Python:

1. Prepare the client side:

```
python3.8 -m venv .venvclient
source .venvclient/bin/activate
pip install -r client_requirements.txt
```
1. Run the client script:

```
URL="http://localhost:8888" python client.py
```

And here it is! Whether you use Docker or Python for the client side, you deployed a Concrete ML model and ran an inference using Fully Homomorphic Encryption. In particular, you can see that the FHE predictions are correct.
13 changes: 11 additions & 2 deletions use_case_examples/deployment/breast_cancer/client.py
@@ -21,7 +21,7 @@

from concrete.ml.deployment import FHEModelClient

URL = os.environ.get("URL", "http://localhost:8888")
STATUS_OK = 200
ROOT = Path(__file__).parent / "client"
ROOT.mkdir(exist_ok=True)
@@ -105,4 +105,13 @@
    encrypted_result = result.content
    decrypted_prediction = client.deserialize_decrypt_dequantize(encrypted_result)[0]
    decrypted_predictions.append(decrypted_prediction)
print(f"Decrypted predictions are: {decrypted_predictions}")

decrypted_predictions_classes = numpy.array(decrypted_predictions).argmax(axis=1)
print(f"Decrypted prediction classes are: {decrypted_predictions_classes}")

# Let's check the results against the expected (clear) labels
clear_prediction_classes = y[0:10]
accuracy = (clear_prediction_classes == decrypted_predictions_classes).mean()
print(f"Accuracy between FHE prediction and expected results is: {accuracy*100:.0f}%")

Empty file modified use_case_examples/deployment/breast_cancer/client.sh (100644 → 100755)
3 changes: 3 additions & 0 deletions use_case_examples/deployment/breast_cancer/client_requirements.txt
@@ -1,3 +1,6 @@
grequests
requests
tqdm
numpy
scikit-learn
concrete-ml
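
The three added requirements (`numpy`, `scikit-learn`, `concrete-ml`) are what make the plain-Python client path work: they are pulled in by the `pip install -r client_requirements.txt` step in the README above.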
178 changes: 178 additions & 0 deletions use_case_examples/deployment/breast_cancer/client_via_tfhe-rs.py
@@ -0,0 +1,178 @@
"""Client script.
This script does the following:
- Query crypto-parameters and pre/post-processing parameters
- Quantize the inputs using the parameters
- Encrypt data using the crypto-parameters
- Send the encrypted data to the server (async using grequests)
- Collect the data and decrypt it
- De-quantize the decrypted results
"""

import io
import os
from pathlib import Path

import grequests
import numpy
import requests
from sklearn.datasets import load_breast_cancer
from tqdm import tqdm

from concrete.ml.deployment import FHEModelClient

URL = os.environ.get("URL", "http://localhost:8888")
STATUS_OK = 200
ROOT = Path(__file__).parent / "client"
ROOT.mkdir(exist_ok=True)

encrypt_with_tfhe = False
nb_samples = 10
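
# Note: encrypt_with_tfhe switches the input encryption to TFHE-rs instead of Concrete,
# but that branch is still a placeholder in the loop below; nb_samples is the number of
# test samples sent for FHE inference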

def to_tuple(x) -> tuple:
    """Make the input a tuple if it is not already the case.

    Args:
        x (Any): The input to consider. It can already be a tuple.

    Returns:
        tuple: The input as a tuple.
    """
    # If the input is not a tuple, return a tuple of a single element
    if not isinstance(x, tuple):
        return (x,)

    return x
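
# Example usage: to_tuple(3) == (3,); to_tuple((1, 2)) == (1, 2)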

def serialize_encrypted_values(
    *values_enc,
):
    """Serialize encrypted values.

    If a value is None, None is returned.

    Args:
        values_enc (Optional[fhe.Value]): The values to serialize.

    Returns:
        Union[Optional[bytes], Optional[Tuple[bytes]]]: The serialized values.
    """
    values_enc_serialized = tuple(
        value_enc.serialize() if value_enc is not None else None for value_enc in values_enc
    )

    if len(values_enc_serialized) == 1:
        return values_enc_serialized[0]

    return values_enc_serialized
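
# Example usage: a single encrypted value serializes to one bytes object, several to a tuple of bytes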

if __name__ == "__main__":
    # Get the necessary data for the client
    # client.zip
    zip_response = requests.get(f"{URL}/get_client")
    assert zip_response.status_code == STATUS_OK
    with open(ROOT / "client.zip", "wb") as file:
        file.write(zip_response.content)

    # Get the data to infer
    X, y = load_breast_cancer(return_X_y=True)
    assert isinstance(X, numpy.ndarray)
    assert isinstance(y, numpy.ndarray)
    X = X[-nb_samples:]
    y = y[-nb_samples:]

    assert isinstance(X, numpy.ndarray)
    assert isinstance(y, numpy.ndarray)

    # Create the client
    client = FHEModelClient(path_dir=str(ROOT.resolve()), key_dir=str((ROOT / "keys").resolve()))

    # The client first needs to create the private and evaluation keys
    serialized_evaluation_keys = client.get_serialized_evaluation_keys()

    assert isinstance(serialized_evaluation_keys, bytes)

    # Evaluation keys can be quite large files but only have to be shared once with the server

    # Check the size of the evaluation keys (in MB)
    print(f"Evaluation keys size: {len(serialized_evaluation_keys) / (10**6):.2f} MB")

    # Send this evaluation key to the server (this has to be done only once)
    # send_evaluation_key_to_server(serialized_evaluation_keys)

    # Now we have everything for the client to interact with the server

    # We create a loop to send the input to the server and receive the encrypted prediction
    execution_time = []
    encrypted_input = None
    clear_input = None

    # Send the evaluation key to the server as an uploaded file; the server returns a UID
    # that identifies this client in later requests
    response = requests.post(
        f"{URL}/add_key", files={"key": io.BytesIO(initial_bytes=serialized_evaluation_keys)}
    )
    assert response.status_code == STATUS_OK
    uid = response.json()["uid"]

    inferences = []
    # Launch the queries
    for i in tqdm(range(len(X))):
        clear_input = X[[i], :]

        assert isinstance(clear_input, numpy.ndarray)

        quantized_input = to_tuple(client.model.quantize_input(clear_input))

        # Here, we can encrypt with TFHE-rs instead of Concrete
        if encrypt_with_tfhe:
            # The TFHE-rs path is not implemented yet in this script
            pass
        else:
            encrypted_input = to_tuple(client.client.encrypt(*quantized_input))

        encrypted_input = serialize_encrypted_values(*encrypted_input)

        # Debugging (disabled by default)
        if False:
            print(f"Clear input: {clear_input}")
            print(f"Quantized input: {quantized_input}")
            print(f"Encrypted input: {encrypted_input}")

        assert isinstance(encrypted_input, bytes)

        inferences.append(
            grequests.post(
                f"{URL}/compute",
                files={
                    "model_input": io.BytesIO(encrypted_input),
                },
                data={
                    "uid": uid,
                },
            )
        )

    # Unpack the results
    decrypted_predictions = []
    for result in grequests.map(inferences):
        if result is None:
            raise ValueError("Result is None, probably due to a crash on the server side.")
        assert result.status_code == STATUS_OK

        encrypted_result = result.content

        # Decrypt and deserialize the values
        result_quant = to_tuple(client.deserialize_decrypt(encrypted_result))
        result = to_tuple(client.model.dequantize_output(*result_quant))
        decrypted_prediction = client.model.post_processing(*result)[0]

        decrypted_predictions.append(decrypted_prediction)
    print(f"Decrypted predictions are: {decrypted_predictions}")

    decrypted_predictions_classes = numpy.array(decrypted_predictions).argmax(axis=1)
    print(f"Decrypted prediction classes are: {decrypted_predictions_classes}")

    # Let's check the results against the expected (clear) labels
    clear_prediction_classes = y[0:nb_samples]
    accuracy = (clear_prediction_classes == decrypted_predictions_classes).mean()
    print(f"Accuracy between FHE prediction and expected results is: {accuracy*100:.0f}%")

2 changes: 1 addition & 1 deletion use_case_examples/deployment/breast_cancer/train.py
@@ -20,4 +20,4 @@
model.fit(X_train, y_train)
model.compile(X_train)
dev = FHEModelDev("./dev", model)
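# Saving via MLIR is meant to keep the serialized artifacts portable across
# environments (the version issues mentioned in the commit message)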
dev.save(via_mlir=True)
6 changes: 4 additions & 2 deletions use_case_examples/deployment/server/deploy_to_docker.py
@@ -97,11 +97,13 @@ def main(path_to_model: Path, image_name: str):
    if args.only_build:
        return

    PORT_TO_CHOOSE = 8888

    # Run newly created Docker server
    try:
        with open("./url.txt", mode="w", encoding="utf-8") as file:
            file.write(f"http://localhost:{PORT_TO_CHOOSE}")
        subprocess.check_output(f"docker run -p {PORT_TO_CHOOSE}:5000 {image_name}", shell=True)
    except KeyboardInterrupt:
        message = "Terminate container? (y/n) "
        shutdown_instance = input(message).lower()
