Block size limit: make configurable #10618

Open
3 tasks done
hsanjuan opened this issue Dec 9, 2024 · 3 comments
Labels
kind/enhancement A net-new feature or improvement to an existing feature P2 Medium: Good to have, but can wait until someone steps up

Comments

@hsanjuan
Contributor

hsanjuan commented Dec 9, 2024

Checklist

  • My issue is specific & actionable.
  • I am not suggesting a protocol enhancement.
  • I have searched on the issue tracker for my issue.

Description

This is a proposal to expose what is commonly known as the "block size limit" (which I think is actually the maximum number of bytes that can be read/written to a stream?), while keeping the default.

  • There is no reason that private ipfs networks cannot adjust the limit to whatever they want. This should be easily configurable and not require re-compiling ipfs.
  • I would argue there is also no reason that participants in the public ipfs network cannot decide to work with higher (or smaller) block limits, based on their own needs.
  • Useful feedback can be provided to the user in the form of "block-too-large" errors when adding or fetching large blocks.

Rationale: currently, users are able to adjust resource-manager settings that limit how many connections they accept or how much memory they use. I don't see this setting as anything different. We don't consider it an anomaly if we cannot talk to a peer because our connections get killed by the resource manager. We should not worry either that we cannot download a block because it is too big for the limit.

@hsanjuan hsanjuan added the kind/enhancement A net-new feature or improvement to an existing feature label Dec 9, 2024
@gammazero gammazero added the P2 Medium: Good to have, but can wait until someone steps up label Dec 10, 2024
@DioBr4nd0

Hi @gammazero @hsanjuan, I would like to work on this issue.

@lidel
Member

lidel commented Dec 20, 2024

If you want to work on this, here are some pointers:

  • Add an Import.BlockSizeLimit configuration option (see the sketch after this list)
    • make it an optional bytes value, similar to Swarm.ResourceMgr.MaxMemory, and define DefaultBlockSizeLimit for use with the WithDefault construct
    • use it instead of the hardcoded 1 MiB limit in core/commands/cmdutils/utils.go#SoftBlockLimit
      • this limit applies to commands like ipfs block put via the --allow-big-block CLI param (AllowBigBlockOptionName)
  • Create a companion PR for boxo:
    • pass it to boxo/bitswap as a configuration option
    • pass it to boxo/gateway, as the block and car backends may also have a hardcoded block size limit of 1/2/4 MiB somewhere.
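
To make the config pointer above concrete, here is a minimal sketch of what the new option could look like, assuming it lives in Kubo's config package and reuses the nil-safe OptionalInteger/WithDefault pattern used by other optional fields; the field name, helper name, and default below are assumptions, not a settled design:

```go
package config

// DefaultBlockSizeLimit mirrors today's hardcoded 1 MiB soft limit
// (SoftBlockLimit in core/commands/cmdutils). Assumed default, not final.
const DefaultBlockSizeLimit = 1024 * 1024

// Import is Kubo's existing import config section; only the proposed
// new field is shown here as a sketch.
type Import struct {
	// BlockSizeLimit is the maximum accepted block size in bytes.
	// When unset, DefaultBlockSizeLimit applies.
	BlockSizeLimit *OptionalInteger `json:",omitempty"`
}

// BlockSizeLimitOrDefault resolves the configured limit, falling back to
// the default when the option is not present in the config file.
// (Helper name is hypothetical.)
func (i *Import) BlockSizeLimitOrDefault() int64 {
	return i.BlockSizeLimit.WithDefault(DefaultBlockSizeLimit)
}
```

If it ends up looking like that, users would presumably set it with something along the lines of `ipfs config --json Import.BlockSizeLimit 2097152` (again, assuming the option name proposed above).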

@aschmahmann
Contributor

@lidel @hsanjuan this doesn't seem like a good idea at the moment. Aside from whether it's a good idea in general, I suspect the UX issues and maintenance burden we'd be bringing on ourselves are not worthwhile, at least without a good deal of other work being done first.

TLDR:

  • Just plumbing through feedback to the user about why a request has failed (no peers; peers, but they're offline; peers, but they speak the wrong protocols; peers, but all the ones who claim to have it say the block is too big) is a large issue by itself. It seems like a prerequisite for this work (and is independently useful).
  • This seems like a big pile of work and long-term maintenance that will result in a more fragmented ecosystem... who wants this, and who will tell us whether it's doing the job well enough for them?

Useful feedback can be provided to the user in the form of "block-too-large" errors when adding or fetching large blocks.

We won't really be able to implement this in bitswap without a protocol change. It's a small and doable one, but it would be needed for this.

Additionally, we'd need a way to surface that to the user. At the moment the data transfer stack won't surface any errors like this to the user, so there's no way for them to be notified about "block-too-large". Note: you also can't really trust the response from a peer here unless there's some associated proof, which would make the protocol change even more complicated.

users are able to adjust resource-manager settings that limit how many connections they accept or how much memory they use. I don't see this setting as anything different.

This is very different from resource-manager limits. It's probably closer to how networks have packet size limits, and as a result users of protocols like DNS need to be careful that if their records are too big some users won't be able to get them (e.g. it'll work over TCP, but get dropped over UDP depending on the network). While people certainly still use jumbo frames within environments they control (e.g. private networks), doing this more broadly is generally asking for problems.

We should not worry either that we cannot download a block because it is too big for the limit.

Related to the above: do we have enough people who want to support 4, 10, 100, ... whatever MiB blocks to justify the changes here, knowing that it'll complicate the portability story (e.g. which gateways, pinning services, etc. support which block sizes) and potentially make it impossible for groups that started off operating within isolated environments to practically join the public network, since none of their content-addressed links would be usable there?

pass it to boxo/bitswap as a configuration option
bitswap will refuse messages bigger than 2 MiB, maxMessageSize is hardcoded here

There's a little bit of messing around you could do there... but it's not valuable if you're looking for an increase, because the message size limit (i.e. including all wrapping and metadata) is, by specification, 4 MiB https://github.com/ipfs/specs/blob/d2c2479d19f0144c42b217a870b247bbc4485bd9/src/bitswap-protocol.md?plain=1#L130 (implementation: https://github.com/ipfs/boxo/blob/17db35a2e435532b04823e9b447b465c834e4d37/bitswap/network/ipfs_impl.go#L429).
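
For context, the read-side ceiling in the linked implementation comes from constructing the stream reader with libp2p's MessageSizeMax constant (4 MiB). A rough sketch of the spot where a configurable limit would have to be threaded in (the wrapper function names here are made up for illustration):

```go
package bsnet

import (
	"github.com/libp2p/go-libp2p/core/network"
	msgio "github.com/libp2p/go-msgio"
)

// Roughly what the bitswap network layer does today: the varint-framed
// reader rejects any message larger than network.MessageSizeMax (4 MiB).
func newMessageReader(s network.Stream) msgio.ReadCloser {
	return msgio.NewVarintReaderSize(s, network.MessageSizeMax)
}

// A configurable variant would need the limit plumbed down from node
// configuration instead of the package-level constant (hypothetical).
func newMessageReaderWithLimit(s network.Stream, maxMessageSize int) msgio.ReadCloser {
	return msgio.NewVarintReaderSize(s, maxMessageSize)
}
```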

The result is that you'd need to do some (maybe not huge, depending on the approach) protocol work to make this bigger as well. You might also have to consider whether people running with different parameters would need a forked protocol ID so as not to confuse clients, or whether it'd be reasonable to keep the protocol IDs the same.
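
Purely to illustrate that last point: a fork could mean advertising a separate libp2p protocol ID for peers that opt into larger messages, so that default-configured clients never negotiate the bigger limit. Only /ipfs/bitswap/1.2.0 below is a real current ID; the forked one is invented:

```go
const (
	// Current bitswap protocol ID spoken by default-configured nodes.
	ProtocolBitswap = "/ipfs/bitswap/1.2.0"

	// Hypothetical ID for peers accepting larger messages/blocks;
	// the suffix is made up purely for illustration.
	ProtocolBitswapBigBlocks = "/ipfs/bitswap/1.2.0/big-blocks"
)
```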
