Rclone improvements #366

Open · wants to merge 4 commits into base: main
2 changes: 1 addition & 1 deletion app/dcs/buckets/object-listings/page.md
@@ -49,5 +49,5 @@ Avoid using access grants or S3 credentials with different path encryption settings
{% /callout %}

{% callout type="info" %}
The [](docId:4oDAezF-FcfPr0WPl7knd) in the Satellite Console cannot list objects with unencrypted object keys yet. If you try to open a bucket with such objects, you'll see it empty with a message "You have objects locked with a different passphrase". Support for unencrypted object keys in the Object Browser will be added in a future release. Until then, you can use the [](docId:TC-N6QQVQg8w2cRqvEqEf) or a S3-compatible app to list such objects.
The [](docId:4oDAezF-FcfPr0WPl7knd) in the Satellite Console cannot list objects with unencrypted object keys yet. If you try to open a bucket with such objects, you'll see it empty with a message "You have objects locked with a different passphrase". Support for unencrypted object keys in the Object Browser will be added in a future release. Until then, you can use the [](docId:TC-N6QQVQg8w2cRqvEqEf) or an S3-compatible app to list such objects.
{% /callout %}
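For instance, such objects can be listed from the terminal with Uplink's `ls` command. A sketch, with `mybucket` as a placeholder bucket name:

```bash
uplink ls sj://mybucket
```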
8 changes: 4 additions & 4 deletions app/dcs/third-party-tools/file-transfer-performance/page.md
@@ -42,7 +42,7 @@ So, for the purposes of demonstrating uploads and downloads for many smaller files

When working with small and medium-sized files, the optimal parallelism is limited by the segment, or "chunk", size. With [](docId:WayQo-4CZXkITaHiGeQF_), this segmentation is referred to as "concurrency." So, for example, a 1GB file would be optimally uploaded to Storj with the following command:

```Text
```bash
rclone copy --progress --s3-upload-concurrency 16 --s3-chunk-size 64M 1gb.zip remote:bucket
```
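To make the arithmetic concrete: 16 concurrent chunks × 64 MB = 1024 MB, so the whole 1GB file is in flight at once. Below is a minimal sketch that derives the concurrency for an arbitrary file size; the helper variables are illustrative rather than part of the documented workflow, and GNU `stat` is assumed (macOS would use `stat -f%z`).

```bash
FILE=1gb.zip
CHUNK_MB=64
# Round the file size up to whole megabytes, then to whole 64M chunks.
SIZE_MB=$(( ($(stat -c%s "$FILE") + 1048575) / 1048576 ))
CONCURRENCY=$(( (SIZE_MB + CHUNK_MB - 1) / CHUNK_MB ))   # 1024 / 64 = 16
rclone copy --progress --s3-upload-concurrency "$CONCURRENCY" --s3-chunk-size "${CHUNK_MB}M" "$FILE" remote:bucket
```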

@@ -60,7 +60,7 @@ For example, a 10GB file could theoretically be transferred with 160 concurrency

Rclone also offers the advantage of being able to transfer multiple files in parallel with the `--transfers` flag. For example, multiple 1GB files could be transferred simultaneously with this command, modified from the single file example above:

```Text
```bash
rclone copy --progress --transfers 4 --s3-upload-concurrency 16 --s3-chunk-size 64M 1gb.zip remote:bucket
```
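Since `--transfers` parallelizes across files, in practice you would point Rclone at a directory rather than a single archive. A hedged variant of the command above, where `./uploads/` is a hypothetical local directory of similar-sized files:

```bash
rclone copy --progress --transfers 4 --s3-upload-concurrency 16 --s3-chunk-size 64M ./uploads/ remote:bucket
```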

@@ -72,7 +72,7 @@ The relationship of constant chunk size to variable file size is the determining

The same basic mathematical calculations for uploads are also relevant for downloads. However, since the Uplink CLI supports parallelism with downloads, it is often the better choice for performance. This can be achieved using the `--parallelism` flag, as shown below:

```Text
```bash
uplink cp sj://bucket/bighugefile.zip ~/Downloads/bighugefile.zip --parallelism 4
```

@@ -82,7 +82,7 @@ Because Uplink bypasses the Storj edge network layer, this is the best option for

With small files, [](docId:Mk51zylAE6xmqP7jUYAuX) is still the best option to use for downloads as well. This is again thanks to the `--transfers` flag that allows Rclone to download multiple files in parallel, taking advantage of concurrency even when files are smaller than the Storj segment size. To download 10 small files at once with Rclone, the command would be:

```Text
```bash
rclone copy --progress --transfers 10 remote:bucket /tmp
```

17 changes: 12 additions & 5 deletions app/dcs/third-party-tools/rclone/page.md
@@ -15,24 +15,31 @@ metadata:

Follow the [Getting Started guide](docId:AsyYcUJFbO1JI8-Tu8tW3) to set up Rclone.

The following is more details about the 2 ways you can use Rclone with Storj.
There are 2 ways to use Rclone with Storj:

1. **S3 Compatible:** Connect to the Storj network via the S3 protocol/S3 gateway.
2. **Native:** Connect over the Storj protocol to access your bucket.
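For orientation, the two patterns correspond to two different Rclone remote types. A sketch using `rclone config create` (the `key=value` form needs a reasonably recent Rclone; the remote names, placeholder credentials, and gateway endpoint below are illustrative, so follow the linked guides for the authoritative setup):

```bash
# Native pattern: the "storj" backend, authenticated with an access grant.
rclone config create storj-native storj access_grant=YOUR_ACCESS_GRANT

# S3-compatible pattern: the "s3" backend pointed at Storj's gateway.
rclone config create storj-s3 s3 provider=Storj \
  access_key_id=YOUR_ACCESS_KEY secret_access_key=YOUR_SECRET_KEY \
  endpoint=https://gateway.storjshare.io
```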

## S3 Compatible

Use our [S3 compatible API](docId:eZ4caegh9queuQuaazoo) to increase upload performance and reduce the load on your systems and network. A 1GB upload will result in only 1GB of data being uploaded
Use our [S3 compatible API](docId:eZ4caegh9queuQuaazoo) to increase upload performance and reduce the load on your systems and network. A 1GB upload will result in only 1GB of data being uploaded.

- Faster upload
- Reduction in network load
- Server-side encryption

[See common commands](docId:AsyYcUJFbO1JI8-Tu8tW3) to get started!

## Native

Use our native integration pattern to take advantage of client-side encryption as well as to achieve the best possible download performance. Uploads will be erasure-coded [](docId:Pksf8d0TCLY2tBgXeT18d), thus a 1GB upload will result in 2.68GB of data being uploaded to storage nodes across the network.
Use our native Rclone integration to take advantage of client-side encryption and to achieve the best possible download performance. Note that uploads will be erasure-coded locally [](docId:Pksf8d0TCLY2tBgXeT18d); thus, uploading a 1GB file will result in 2.68GB of data being uploaded from your system to storage nodes across the network.

- End-to-end encryption
- Faster download speed

[See common commands](docId:Mk51zylAE6xmqP7jUYAuX) to get started!

{% quick-links %}
{% quick-link title="Rclone S3 compatible" href="docId:AsyYcUJFbO1JI8-Tu8tW3" /%}
{% quick-link title="Rclone native" href="docId:Mk51zylAE6xmqP7jUYAuX" /%}
{% quick-link title="Rclone - S3 Compatible" href="docId:AsyYcUJFbO1JI8-Tu8tW3" /%}
{% quick-link title="Rclone - Native" href="docId:Mk51zylAE6xmqP7jUYAuX" /%}
{% /quick-links %}
12 changes: 7 additions & 5 deletions app/dcs/third-party-tools/rclone/rclone-native/page.md
@@ -13,18 +13,20 @@ metadata:

## Selecting an Integration Pattern

Use our native integration pattern to take advantage of client-side encryption as well as to achieve the best possible download performance. Uploads will be erasure-coded locally, thus a 1GB upload will result in 2.68GB of data being uploaded to storage nodes across the network.
Use our native Rclone integration to take advantage of client-side encryption and to achieve the best possible download performance. Note that uploads will be erasure-coded locally [](docId:Pksf8d0TCLY2tBgXeT18d); thus, uploading a 1GB file will result in 2.68GB of data being uploaded from your system to storage nodes across the network.

## Use this pattern for
Use this pattern (native integration) for:

- The strongest security
- The best download speeds

Alternatively, you can use the [S3 compatible integration](docId:eZ4caegh9queuQuaazoo) with Rclone to increase upload performance and reduce the load on your systems and network.

## Setup

First, [Download](https://rclone.org/downloads/) and extract the rclone binary onto your system.
First, [download rclone](https://rclone.org/downloads/) and extract the rclone binary onto your system.

Execute the config command:
Execute the config command to set up a new Storj "remote" configuration:

```bash
rclone config
@@ -179,4 +181,4 @@ q) Quit config
e/n/d/r/c/s/q> q
```
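If you already have an access grant, the same remote can be created without the interactive wizard. A sketch, with the grant value as a placeholder and the remote named `waterbear` to match the examples in these docs:

```bash
# Creates a native Storj remote named "waterbear" in one step.
rclone config create waterbear storj access_grant=YOUR_ACCESS_GRANT
```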

For additional commands you can do, see [](docId:WayQo-4CZXkITaHiGeQF_).
For a listing of Rclone commands for general use, see [](docId:WayQo-4CZXkITaHiGeQF_).
37 changes: 29 additions & 8 deletions app/dcs/third-party-tools/rclone/rclone-s3/page.md
@@ -1,28 +1,28 @@
---
title: Rclone additional commands
title: Rclone Commands
docId: WayQo-4CZXkITaHiGeQF_
redirects:
- /dcs/how-tos/sync-files-with-rclone/rclone-with-hosted-gateway
metadata:
title: Rclone with S3 Compatibility Guide
description: Step-by-step guide to configure Rclone pointed to Storj's S3 compatible API, providing better upload performance and lower network load.
title: Rclone Command Guide
description: Step-by-step guide to use Rclone with common commands.
---

{% callout type="info" %}
Follow the [Getting Started guide](docId:AsyYcUJFbO1JI8-Tu8tW3) to set up Rclone.
{% /callout %}

The follow are additional commands or options you can consider when using Rclone
The following are additional commands and options you can consider when using Rclone.

## Configuration password
## Configuration Password

For additional security, you should consider using the `s) Set configuration password` option. It will encrypt the `rclone.conf` configuration file. This way secrets like the [](docId:OXSINcFRuVMBacPvswwNU), the encryption passphrase, and the access grant can't be easily stolen.
For additional security, you should consider using the `s) Set configuration password` option. It will encrypt the `rclone.conf` configuration file. This way, secrets like the [](docId:OXSINcFRuVMBacPvswwNU), the encryption passphrase, and the access grant can't be easily stolen.
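When the configuration file is encrypted, Rclone prompts for the password on each run; for unattended use it also honors the `RCLONE_CONFIG_PASS` environment variable. A sketch (note the password then lives in the process environment, which is its own trade-off):

```bash
export RCLONE_CONFIG_PASS='my-config-password'   # placeholder value
rclone lsf waterbear:                            # runs without prompting
```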

## Create a Bucket

Use the `mkdir` command to create a new bucket, e.g., `mybucket`.

```yaml
```bash
rclone mkdir waterbear:mybucket
```
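To confirm the bucket exists, list the top level of the remote; `lsd` shows directories, which at the root of a remote are buckets:

```bash
rclone lsd waterbear:
```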

@@ -162,8 +162,29 @@ Or between two Storj buckets.
rclone sync --progress waterbear-us:mybucket/videos/ waterbear-europe:mybucket/videos/
```

Or even between another cloud storage and Storj.
Or even between another cloud storage service (e.g., an AWS S3 remote named `s3`) and Storj.

```bash
rclone sync --progress s3:mybucket/videos/ waterbear:mybucket/videos/
```
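Because `sync` makes the destination exactly match the source, including deleting extra files, a `--dry-run` pass first is a sensible habit. A sketch against the same remotes:

```bash
rclone sync --dry-run s3:mybucket/videos/ waterbear:mybucket/videos/
```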


## Mounting a Bucket

Use the `mount` command to mount a bucket to a folder (macOS and Linux only). When mounted, you can use the bucket as a local folder.

```bash
sudo mkdir /mnt/mybucket
sudo chown $USER: /mnt/mybucket
rclone mount waterbear:mybucket /mnt/mybucket --vfs-cache-mode full
```

{% callout type="info" %}
The `--vfs-cache-mode full` flag means that all reads and writes are cached to disk. Without it, reads and writes are done directly to the Storj bucket.
{% /callout %}
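To keep the mount alive after the terminal closes, Rclone's `--daemon` flag backgrounds the process (Linux and macOS). A sketch of the same mount as above:

```bash
rclone mount waterbear:mybucket /mnt/mybucket --vfs-cache-mode full --daemon
```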

To unmount the bucket, use the `umount` command.

```bash
umount /mnt/mybucket
```
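On Linux, where the mount is provided through FUSE, `fusermount -u` is the usual unprivileged fallback if plain `umount` is refused:

```bash
fusermount -u /mnt/mybucket
```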
2 changes: 1 addition & 1 deletion app/dcs/third-party-tools/s3fs/page.md
@@ -87,4 +87,4 @@ Now you can use the mounted bucket almost like any other folder.

We recommend having a look at [](docId:LdrqSoECrAyE_LQMvj3aF) and its [`rclone mount` command](https://rclone.org/commands/rclone_mount/) as well.

Please note - you can configure a native connector in rclone, (see: [](docId:Mk51zylAE6xmqP7jUYAuX)) and use [](docId:Pksf8d0TCLY2tBgXeT18d), unlike [](docId:yYCzPT8HHcbEZZMvfoCFa) which uses[](docId:hf2uumViqYvS1oq8TYbeW) to provide a S3-compatible protocol (the S3 protocol does not use client side encryption by design).
Please note - you can configure a native connector in rclone (see [](docId:Mk51zylAE6xmqP7jUYAuX)) and use [](docId:Pksf8d0TCLY2tBgXeT18d), unlike [](docId:yYCzPT8HHcbEZZMvfoCFa), which uses [](docId:hf2uumViqYvS1oq8TYbeW) to provide an S3-compatible protocol (the S3 protocol does not use client-side encryption by design).
4 changes: 2 additions & 2 deletions app/learn/self-host/gateway-st/page.md
@@ -488,13 +488,13 @@ If you use `localhost` or `127.0.0.1` as your `local_IP`, you will not be able to

You can use the [Minio caching technology](https://docs.min.io/docs/minio-disk-cache-guide.html) in conjunction with the hosting of a static website.

> The following example uses `/mnt/drive1`, `/mnt/drive2` ,`/mnt/cache1` ... `/mnt/cache3` for caching, while excluding all objects under bucket `mybucket` and all objects with '.pdf' extensions on a S3 Gateway setup. Objects are cached if they have been accessed three times or more. Cache max usage is restricted to 80% of disk capacity in this example. Garbage collection is triggered when the high watermark is reached (i.e. at 72% of cache disk usage) and will clear the least recently accessed entries until the disk usage drops to the low watermark - i.e. cache disk usage drops to 56% (70% of 80% quota).
> The following example uses `/mnt/drive1`, `/mnt/drive2`, `/mnt/cache1` ... `/mnt/cache3` for caching, while excluding all objects under bucket `mybucket` and all objects with '.pdf' extensions on an S3 Gateway setup. Objects are cached if they have been accessed three times or more. Cache max usage is restricted to 80% of disk capacity in this example. Garbage collection is triggered when the high watermark is reached (i.e. at 72% of cache disk usage) and will clear the least recently accessed entries until the disk usage drops to the low watermark - i.e. cache disk usage drops to 56% (70% of 80% quota).
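For reference, a sketch of the corresponding exports on Linux, reconstructed from the description above using MinIO's legacy disk-cache variables (treat the exact names and values as assumptions and defer to the linked MinIO guide):

```bash
export MINIO_CACHE="on"
# Drives used for caching in the example above
export MINIO_CACHE_DRIVES="/mnt/drive1,/mnt/drive2,/mnt/cache1,/mnt/cache2,/mnt/cache3"
# Exclude all objects under mybucket and all .pdf objects
export MINIO_CACHE_EXCLUDE="mybucket/*,*.pdf"
export MINIO_CACHE_QUOTA=80           # cache may use up to 80% of disk capacity
export MINIO_CACHE_AFTER=3            # cache an object after 3 accesses
export MINIO_CACHE_WATERMARK_LOW=70   # GC stops at 70% of quota (56% of disk)
export MINIO_CACHE_WATERMARK_HIGH=90  # GC starts at 90% of quota (72% of disk)
```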

Export the environment variables before running the Gateway:

{% tabs %}
{% tab label="Windows" %}
Cache disks are not supported, because caching requires the [`atime`](http://kerolasa.github.io/filetimes.html) function to be enabled.
Cache disks are not supported, because caching requires the [`atime`](https://kerolasa.github.io/filetimes.html) function to be enabled.

```Text
$env:MINIO_CACHE="on"
```