zs3server provides a no-code, S3-compatible decentralized storage server on a Züs allocation using a MinIO gateway interface.
- Züs Overview
- Architecture
- Building zs3 server
- Running zs3 server
- Test MinIO client
- Configure MinIO client
Züs is a high-performance cloud on a fast blockchain offering privacy and configurable uptime. It is an alternative to traditional cloud S3 and has shown better performance on a test network due to its parallel data architecture. The technology uses erasure coding to distribute data between data and parity servers. Züs storage is configurable, giving IT managers the flexibility to design for the desired security and uptime. Using Blimp's workflow, they can design a hybrid or multi-cloud architecture with a few clicks and change redundancy and providers on the fly.
For instance, a user can start with 10 data and 5 parity providers and select where they are located globally, and later add a provider on the fly to increase resilience or performance, or switch to a lower-cost provider.
Users can also add their own servers to the network to operate in a hybrid cloud architecture. Such flexibility allows users to meet their regulatory, content-distribution, and security requirements with a true multi-cloud architecture. Users can also construct a private cloud from their own servers rented across the globe for better content distribution, high availability, higher performance, and lower cost.
The QoS protocol is time-based: the blockchain challenges a provider on a file, and the provider must respond within a time limit based on the file's size to pass. This forces providers to maintain good server and data center performance to earn rewards and income.
Züs's privacy protocol is unique: a user can easily share their encrypted data with business partners, friends, and family through a proxy key sharing protocol. The proxy key is given to the providers, which re-encrypt the data using it so that only the recipient can decrypt it with their private key.
Züs has ecosystem apps to encourage traditional storage consumption: Blimp, an S3 server and cloud migration platform; Vult, a personal cloud app for storing encrypted data and sharing it privately with friends and family; and Chalk, a zero-upfront-cost permanent storage solution for NFT artists.
Other apps include Bolt, a highly secure wallet with an air-gapped 2FA split-key protocol that protects digital assets from hacks and enables staking and earning from storage providers; Atlus, a blockchain explorer; and Chimney, which allows anyone to join the network and earn using their own server, or just a rented one, with no prior knowledge required.
There are three main components that will be installed on the customer server:
- ZS3Server: the main component, which communicates directly with Züs storage.
- LogSearch API: the log component, which stores the audit log from the S3 server; it is consumed through the ZS3 API.
- MinIO Client: the component that communicates directly with the zs3server; it is protected by an access key and secret key.
Prerequisites to run the MinIO ZCN gateway:
- A wallet.json created using zwalletcli
- A config.yaml
- An allocation.txt created using zboxcli
- A zs3server.json option file to configure encryption and compression options
git clone [email protected]:0chain/zs3server.git
cd zs3server
go mod tidy
go build .
export MINIO_ROOT_USER=someminiouser
export MINIO_ROOT_PASSWORD=someminiopassword
./minio gateway zcn --configDir /path/to/config/dir
Note: allocation and configDir are both optional. By default, configDir is ~/.zcn, and if allocation is not provided on the command line, the server looks for an allocation.txt file in the configDir directory.
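The build-and-run steps above can be wrapped in a small launch script. This is only a sketch, not part of the repository: the files it checks for (config.yaml, wallet.json, allocation.txt) come from the prerequisites list, and the ZCN_DIR variable and check_prereqs helper are made up for illustration.

```shell
#!/bin/sh
# Sketch of a launch wrapper for the zcn gateway (not part of the repo).
# Verifies the prerequisite files exist before starting the server.
ZCN_DIR="${ZCN_DIR:-$HOME/.zcn}"

check_prereqs() {
    dir="$1"
    for f in config.yaml wallet.json allocation.txt; do
        if [ ! -f "$dir/$f" ]; then
            echo "missing $dir/$f" >&2
            return 1
        fi
    done
    return 0
}

# Only start if the binary was built and the config directory is complete.
if [ -x ./minio ] && check_prereqs "$ZCN_DIR"; then
    export MINIO_ROOT_USER="${MINIO_ROOT_USER:-someminiouser}"
    export MINIO_ROOT_PASSWORD="${MINIO_ROOT_PASSWORD:-someminiopassword}"
    ./minio gateway zcn --configDir "$ZCN_DIR"
fi
```

Failing fast on a missing wallet or allocation file gives a clearer error than letting the gateway start and fail mid-request.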
If you want to debug locally, build with the -gcflags="all=-N -l" flag to view all the objects during debugging.
- To build and run the MinIO server component, you need to install Docker.
- Run the docker-compose command inside the zs3server directory:
docker-compose -f environment/docker-compose.yaml up -d
- Make sure the allocation.txt file exists in the default folder ~/.zcn.
- Now you can interact with the client API by following this doc.
- You can also interact with the logsearch API by following this doc.
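Before pointing clients at the gateway, it helps to wait until the server is actually answering on port 9000. The snippet below is a generic sketch: the wait_for helper is made up for illustration, while /minio/health/live is MinIO's standard liveness endpoint. The attempt count is kept small here; a real deployment script would likely retry longer.

```shell
#!/bin/sh
# Generic retry helper: run a probe command up to N times, one second apart.
wait_for() {
    attempts="$1"; shift
    i=0
    while [ "$i" -lt "$attempts" ]; do
        if "$@"; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Probe MinIO's standard liveness endpoint on the gateway.
if command -v curl > /dev/null 2>&1; then
    wait_for 5 curl -fsS http://localhost:9000/minio/health/live > /dev/null 2>&1 \
        || echo "gateway did not become ready"
fi
```

The same helper works for the docker-compose deployment, since the gateway is published on the same port.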
Install the AWS CLI from here: https://aws.amazon.com/cli/
Fetch the access key and secret from your deployed zs3server. To configure the AWS CLI, type aws configure and specify the zs3server key information as below:
aws configure
AWS Access Key ID [None]: miniouser
AWS Secret Access Key [None]: miniopassword
Default region name [None]: us-east-1
Default output format [None]: ENTER
Additionally, enable AWS Signature Version 4 for zs3server:
aws configure set default.s3.signature_version s3v4
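The interactive prompts above can also be skipped by writing the AWS CLI's shared credentials and config files directly, which is convenient in deployment scripts. This is a sketch: the "zs3" profile name and the AWS_DIR variable are made up for illustration, while the file layout follows the AWS CLI's documented shared-config format. Note that it appends to any existing files.

```shell
#!/bin/sh
# Configure the AWS CLI non-interactively by appending a "zs3" profile
# to the standard shared credentials/config files (default ~/.aws).
AWS_DIR="${AWS_DIR:-$HOME/.aws}"
mkdir -p "$AWS_DIR"

cat >> "$AWS_DIR/credentials" <<'EOF'
[zs3]
aws_access_key_id = miniouser
aws_secret_access_key = miniopassword
EOF

cat >> "$AWS_DIR/config" <<'EOF'
[profile zs3]
region = us-east-1
s3 =
    signature_version = s3v4
EOF
```

With the profile in place, the commands below can be run as, for example, aws --profile zs3 --endpoint-url https://localhost:9000 s3 ls.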
aws --endpoint-url https://localhost:9000 s3 ls
2016-03-27 02:06:30 deebucket
2016-03-28 21:53:49 guestbucket
2016-03-29 13:34:34 mbtest
2016-03-26 22:01:36 mybucket
2016-03-26 15:37:02 testbucket
aws --endpoint-url https://localhost:9000 s3 ls s3://mybucket
2016-03-30 00:26:53 69297 argparse-1.2.1.tar.gz
2016-03-30 00:35:37 67250 simplejson-3.3.0.tar.gz
aws --endpoint-url https://localhost:9000 s3 mb s3://mybucket
make_bucket: s3://mybucket/
aws --endpoint-url https://localhost:9000 s3 cp simplejson-3.3.0.tar.gz s3://mybucket
upload: ./simplejson-3.3.0.tar.gz to s3://mybucket/simplejson-3.3.0.tar.gz
aws --endpoint-url https://localhost:9000 s3 rm s3://mybucket/argparse-1.2.1.tar.gz
delete: s3://mybucket/argparse-1.2.1.tar.gz
aws --endpoint-url https://localhost:9000 s3 rb s3://mybucket
remove_bucket: s3://mybucket/
mc provides a modern alternative to UNIX commands such as ls, cat, cp, mirror, and diff. It supports filesystems and Amazon S3-compatible cloud storage services.
Install it for your OS from here: https://min.io/docs/minio/macos/index.html
mc alias set zcn http://localhost:9000 miniouser miniopassword --api S3v2
mc ls zcn/
2016-03-27 02:06:30 deebucket
2016-03-28 21:53:49 guestbucket
2016-03-29 13:34:34 mbtest
2016-03-26 22:01:36 mybucket
2016-03-26 15:37:02 testbucket
mc ls zcn/mybucket
2016-03-30 00:26:53 69297 argparse-1.2.1.tar.gz
2016-03-30 00:35:37 67250 simplejson-3.3.0.tar.gz
mc mb zcn/mybucket
make_bucket: zcn/mybucket
mc cp simplejson-3.3.0.tar.gz zcn/mybucket
upload: ./simplejson-3.3.0.tar.gz to zcn/mybucket/simplejson-3.3.0.tar.gz
mc rm zcn/mybucket/argparse-1.2.1.tar.gz
delete: zcn/mybucket/argparse-1.2.1.tar.gz
mc rb zcn/mybucket
remove_bucket: zcn/mybucket/
Check mc --help for the exhaustive list of available commands.
- Add the following authorization settings:
- The AccessKey would be the MINIO_ROOT_USER you set earlier during zs3server deployment, and the SecretKey would be the MINIO_ROOT_PASSWORD.
- If you do not want to share the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD, you can also create a user from the MinIO console and share that user's access key and secret instead.
- Use the REST APIs to interact with the server.
- A Postman collection for the same is provided below: Postman Collection
To set up replication, you need two zs3servers. To run two zs3servers on the same machine, copy the contents of the .zcn folder to a .zcn2 folder and change allocation.txt and zs3server.json accordingly. For the changes in docker-compose.yaml, refer to the following example: docker-compose.yaml
- Configure both zs3servers using the MinIO client.
mc alias set primary http://<HOST_IP>:9000 miniouser miniopassword --api S3v2
mc alias set secondary http://<HOST_IP>:9002 miniouser miniopassword --api S3v2
- Set up replication using the following command; for more details, refer to the mc mirror command.
./mc mirror primary/<BUCKET_PREFIX>/ secondary/<BUCKET_PREFIX>/ --remove --watch
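The mirror step above can be wrapped in a small script so the endpoints and bucket prefix are set in one place. This is a sketch: the PRIMARY, SECONDARY, and BUCKET_PREFIX variables and the RUN_MIRROR guard are made up for illustration; the alias names match the example above, and the --remove and --watch flags are the same mc mirror options shown in the command above.

```shell
#!/bin/sh
# Sketch: continuous replication from the primary to the secondary zs3server.
PRIMARY="${PRIMARY:-primary}"
SECONDARY="${SECONDARY:-secondary}"
BUCKET_PREFIX="${BUCKET_PREFIX:-mybucket}"

# --remove deletes objects on the target that were removed on the source;
# --watch keeps mirroring new changes until the process is stopped.
set -- mirror "$PRIMARY/$BUCKET_PREFIX/" "$SECONDARY/$BUCKET_PREFIX/" --remove --watch

# Only run mc when explicitly enabled; otherwise print the command (dry run).
if [ "${RUN_MIRROR:-0}" = "1" ] && command -v mc > /dev/null 2>&1; then
    mc "$@"
else
    echo "dry run: mc $*"
fi
```

Because --watch keeps the process in the foreground, a real deployment would typically run this under a process supervisor rather than from an interactive shell.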
Disaster recovery is the process of replicating data from the secondary back to the primary after a primary failure. To set up disaster recovery, you need replication configured between the primary and secondary zs3servers. In case of primary failure, you can use the following command to sync data from the secondary to the primary.
./mc mirror secondary/<BUCKET_NAME>/ primary/<BUCKET_NAME>/ --summary
To enable encryption and compression, provide the encryption and compression options in the zs3server.json file under the .zcn folder. For example:
{
  "encrypt": true,
  "compress": true
}
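One way to create the option file from the shell, assuming the default ~/.zcn config directory (the ZCN_DIR variable is made up for illustration; note the heredoc overwrites any existing zs3server.json):

```shell
#!/bin/sh
# Write the encryption/compression options to the zs3server.json option
# file in the config directory (default ~/.zcn; override with ZCN_DIR).
ZCN_DIR="${ZCN_DIR:-$HOME/.zcn}"
mkdir -p "$ZCN_DIR"

cat > "$ZCN_DIR/zs3server.json" <<'EOF'
{
  "encrypt": true,
  "compress": true
}
EOF
```

Restart the server after changing the file so the options take effect.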
The server batches upload requests for objects that are uploaded using the PUT API and have a defined content length. max_batch_size is the maximum number of objects to upload in one batch; it should be similar to the concurrency or thread count set on the client, or the expected number of requests per second. batch_wait_time is how long the server waits before finalizing and uploading a batch. batch_workers determines how many batches can be uploaded concurrently. For example:
{
"max_batch_size": 25, // set same as concurrency set via rclone or client
"batch_wait_time": 500, // batch wait time is set in milliseconds, its the time the processor will wait for more operations if the batch size is not reached
"batch_workers": 5, // number of workers, can be increased based on the number of requests, total operations which can be processed concurrently will be max_batch_size * batch_workers
"max_concurrent_request": 100 // max concurrent requests initiated by the server to blobbers
}
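As the comments above note, the total number of operations processed concurrently is the product of the batch size and the number of batch workers. A quick sanity check with the example values:

```shell
#!/bin/sh
# Effective concurrent operations = max_batch_size * batch_workers.
max_batch_size=25
batch_workers=5
total=$((max_batch_size * batch_workers))
echo "concurrent operations: $total"   # 125 with the example values
```

When tuning, keep this product in line with what the client actually issues; a much larger value only adds idle workers.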
The server uploads and downloads objects concurrently based on the number of workers set in the configuration file. The number of workers can be set in the zs3server.json file under the .zcn folder; by default, upload workers are set to 4 and download workers to 6. For example:
{
"upload_workers": 4,
"download_workers": 6
}
The server can be mounted as a file system using s3fs. Mount it using the following commands:
Use the following command to create a password file:
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
Mount the server using the following command:
s3fs mybucket /path/to/mountpoint -o passwd_file=${HOME}/.passwd-s3fs -o url=https://url.to.s3/ -o use_path_request_style,allow_other,umask=000,complement_stat
If you are using compression, we recommend using our MinIO client: Minio Client