
perf(files): Adjust default chunk size for chunked upload to 100MiB #48461

Merged: 1 commit merged from perf/files/chunked-upload-default-100-mib into master on Sep 30, 2024

Conversation

provokateurin
Member

Summary

Step 1 of #47682

Checklist

@@ -44,7 +44,7 @@
 	public static function extendJsConfig($settings): void {
 		$appConfig = json_decode($settings['array']['oc_appconfig'], true);

-		$maxChunkSize = (int)Server::get(IConfig::class)->getAppValue('files', 'max_chunk_size', (string)(10 * 1024 * 1024));
+		$maxChunkSize = (int)Server::get(IConfig::class)->getAppValue('files', 'max_chunk_size', (string)(100 * 1024 * 1024));
Check notice — Code scanning / Psalm: DeprecatedMethod (Note)
The method OCP\IConfig::getAppValue has been marked as deprecated
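For a quick sanity check of the magnitude of the change, the two defaults in the hunk above work out to the following byte values (plain arithmetic, not part of the PR itself):

```python
# Old and new defaults for files/max_chunk_size, as computed in the diff.
old_default = 10 * 1024 * 1024    # before this PR
new_default = 100 * 1024 * 1024   # after this PR

print(old_default)   # 10485760 bytes  = 10 MiB
print(new_default)   # 104857600 bytes = 100 MiB
```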
@joshtrichards
Member

joshtrichards commented Sep 30, 2024

If we do this at 100 (rather than just under it, like 99 I suppose?), it's going to immediately hit installations using the Web client via Cloudflare with breakage. This already happened with the Desktop client and is why we have nextcloud/desktop#4278 as one of the top-3 most-reacted-to bugs at the moment (with solutions proposed such as nextcloud/desktop#5813 and nextcloud/desktop#4826).

At a minimum, we need to document this as a breaking change IMO because people are going to suddenly have problems with our Web client too and therefore will need to figure out how to change the max_chunk_size.
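For admins who do hit this, the app config key named in the diff can be overridden server-side; a sketch using Nextcloud's occ tool (the `files`/`max_chunk_size` key comes from the diff above, while the 90 MiB value here is only an illustrative choice to stay under a 100 MB proxy body limit):

```shell
# Lower the Web client's upload chunk size to 90 MiB.
# The value is in bytes: 90 * 1024 * 1024 = 94371840.
php occ config:app:set files max_chunk_size --value="94371840"
```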

Even if documented, we're going to spend time fielding false bug reports (legitimate feedback IMO, but still) and hear about it over and over again on the forum.

I want to increase this value too, but we need to decide whether it's acceptable to deal with the headaches introduced before we have the dynamic chunk size stuff in place. Or choose a pragmatic intermediate path, I guess, like going with an odd number like 99.

I don't like accommodating an arbitrary outside vendor, but we can't avoid reality. The two biggest external limits are:

EDIT: 100 may be fine actually. The 99 was in my brain to be conservative. :) I don't see any evidence we had any problems until nextcloud/desktop#3600 was merged which bumped 100->1000 so I guess all was good at that point.

@provokateurin
Member Author

@joshtrichards Thanks for the hint, so 99 would actually be the limit we should choose to avoid problems? That would at least make sense, given that Cloudflare limits to 100 and each HTTP request carries a little overhead.

@joshtrichards
Member

I just went back and reviewed things again more closely. 100 may be fine actually. The 99 was in my brain to be conservative. :) I don't see any substantial evidence we had any problems until nextcloud/desktop#3600 was merged and released via 3.4.x (which bumped 100->1000, so I guess all was good right at 100).

@provokateurin
Member Author

Yeah, I also just found https://developers.cloudflare.com/workers/platform/limits/#request-limits and it says the body size is limited to 100MB, not the overall request size. I'm not sure whether they really mean MB or MiB; with the former, the actual limit would be around 95.37 MiB.
If we find problems later we can fix them, hopefully before the final releases (but there is plenty of time until 31).
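The MB-vs-MiB distinction above can be checked with a bit of arithmetic (assuming Cloudflare's "100MB" means decimal megabytes, which the thread leaves open):

```python
MB = 10**6    # megabyte (decimal, as the Cloudflare docs may mean)
MiB = 2**20   # mebibyte (binary, as the chunk size default is computed)

cloudflare_body_limit = 100 * MB
new_default_chunk = 100 * MiB   # 100 * 1024 * 1024 from the diff

# A decimal 100 MB limit expressed in MiB:
print(cloudflare_body_limit / MiB)                 # 95.367431640625

# If the limit really is decimal, a 100 MiB chunk would exceed it:
print(new_default_chunk > cloudflare_body_limit)   # True
```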

provokateurin merged commit ced915a into master on Sep 30, 2024 (172 of 173 checks passed) and deleted the perf/files/chunked-upload-default-100-mib branch on September 30, 2024 16:09.