feat: full blocks, bundling & compression, blob usage optimization & tracking #125

Merged 229 commits into master on Oct 2, 2024

Conversation

@segfault-magnet (Contributor) commented Sep 24, 2024

closes: #116
closes: #69
closes: #124 (thanks to @hal3e for contributing)
A big thank you to @MujkicA for all the research that went into this.

TLDR;

The committer now streams full blocks from the fuel network, tries to find an optimal way to bundle them so that we maximize value for money on L1, and then posts the bundle, split into blobs, to Ethereum.

Preliminary results on testnet fuel block data show that, if given enough time to optimize, we consistently achieve a 3.0 compression ratio (5.0+ without block headers) and blob utilization of 96%+ (3% of that is blob encoding overhead, the rest is our compressed data). (Ran a test during the CEST day for about 3-4 hours with blocks_to_accumulate set to 3600.)

Optimally bundling 1h worth of fuel blocks (around 3600 blocks) takes around 20s on my setup (Ryzen 9 7950X). We'll probably bump that time limit to 3-5 minutes in prod.

How it works

block_height_lookback_window is used to determine how many of the latest fuel blocks we should ensure are committed to L1. The current sentiment is that for fraud proving we only need data for blocks that haven't yet been finalized, so this will probably be set to something around 600k-1M, which is roughly the current weekly block count of the testnet.

We request all blocks that fit the above description and that we don't have in our database:
[diagram: db]
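For illustration, a minimal sketch of that selection with hypothetical names (the actual service streams the blocks rather than building a list of heights up front):

```rust
use std::collections::HashSet;

/// Hypothetical helper: pick every height inside the lookback window that is
/// not yet stored in our database. Names and shapes are illustrative only.
fn heights_to_import(
    latest_height: u64,
    block_height_lookback_window: u64,
    already_stored: &HashSet<u64>,
) -> Vec<u64> {
    let start = latest_height.saturating_sub(block_height_lookback_window);
    (start..=latest_height)
        .filter(|h| !already_stored.contains(h))
        .collect()
}
```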

The next action happens when we either accumulate enough blocks (blocks_to_accumulate) or a timeout happens because we haven't submitted to L1 in a long time (accumulation_timeout).

We take whatever we have accumulated so far and give that to the block bundler:
[diagram: bundler]
The bundler will try out all possible bundles until the optimization_timeout runs out or we've exhausted all possibilities.

For each bundle candidate it will compress it and ask L1 how much gas it would cost to post. The bundler then chooses the best candidate, the one that achieved the best gas per uncompressed byte -- meaning it posted the most data for the least gas. This is a trade-off: including more blocks might give better compression, but you might end up having to pay for a full extra blob that you won't utilize.
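A rough sketch of that selection criterion, with assumed field names (not the actual types):

```rust
/// Assumed shape of a compressed bundle proposal; field names are illustrative.
struct Proposal {
    estimated_gas: u128,
    uncompressed_data_size: u64,
}

/// Returns true if `a` is the better proposal: it spends less gas per
/// uncompressed byte posted. Cross-multiplying avoids floating point.
fn is_better(a: &Proposal, b: &Proposal) -> bool {
    a.estimated_gas * u128::from(b.uncompressed_data_size)
        < b.estimated_gas * u128::from(a.uncompressed_data_size)
}
```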

Not all combinations of blocks are permissible: a bundle must start from the lowest height and keep the blocks sorted by height. So for blocks 1,2,3,4 the possible candidates for bundling are: 1,2,3,4 | 1,2,3 | 1,2 | 1. The optimization_step config influences how the candidates are generated; the above example is for a step of 1. A step of 100 for a 1000-block bundle would generate candidates whose last block has the height 1000, 900, ..., 100, 1, then 950, 850, ..., 50, then 975, 925, and so on, the step being halved each time a pass reaches a bundle of only 1 block.
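A minimal sketch of that enumeration order, assuming heights 1..=max and an initial optimization_step (illustrative, not the actual implementation):

```rust
use std::collections::HashSet;

/// Enumerate the heights at which candidate bundles end: step down from the
/// largest bundle, always include the single-block bundle, then halve the
/// step and repeat with the end heights skipped so far.
fn candidate_end_heights(max_height: u64, mut step: u64) -> Vec<u64> {
    let mut seen = HashSet::new();
    let mut order = Vec::new();

    while step >= 1 {
        let mut height = max_height;
        while height >= step {
            if seen.insert(height) {
                order.push(height);
            }
            height -= step;
        }
        // Always consider the smallest possible bundle (a single block).
        if seen.insert(1) {
            order.push(1);
        }
        if step == 1 {
            break;
        }
        step /= 2;
    }
    order
}

// candidate_end_heights(1000, 100) starts with:
// 1000, 900, ..., 100, 1, 950, 850, ..., 50, 975, 925, ...
```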

Upon finding the best proposal, or once the optimization_timeout fires, the bundle is handed to the L1 adapter so that it may be turned into Fragments (currently blobs, because we target Ethereum):
[diagram: blobs]
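For intuition, a sketch of the splitting, assuming the common EIP-4844 layout of 4096 field elements per blob with 31 usable bytes each (the unused byte per field element is the ~3% encoding overhead mentioned above); the real adapter may pack data differently:

```rust
/// Usable payload per blob under the assumed encoding: 4096 * 31 = 126_976 bytes.
const USABLE_BYTES_PER_BLOB: usize = 4096 * 31;

/// Split a compressed bundle into blob-sized fragments; only the last
/// fragment may be partially filled.
fn fragment(compressed_bundle: &[u8]) -> Vec<Vec<u8>> {
    compressed_bundle
        .chunks(USABLE_BYTES_PER_BLOB)
        .map(|chunk| chunk.to_vec())
        .collect()
}
```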

Those blobs are saved in the database where they will be picked up later by the StateCommitter:
[diagram: committer]

The committer is configured to accumulate at least fragments_to_accumulate blobs before sending them together in a tx. If fragment_accumulation_timeout fires then we will submit whatever blobs we have.

This behavior was added so that the base eth tx cost is amortized over many blobs instead of being paid once for every blob.

All timeouts (except the optimization one) are measured from the time we last had a finalized eth transaction. This was envisioned so that, if we haven't submitted for an unacceptably long time, we don't spend additional time waiting for more blocks or fragments.
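Roughly, the StateCommitter's submit decision looks like this (names assumed, for illustration only):

```rust
use std::time::{Duration, Instant};

/// Send a blob tx once enough fragments have accumulated, or earlier if too
/// much time has passed since the last finalized eth transaction.
fn should_submit_fragments(
    pending_fragments: usize,
    fragments_to_accumulate: usize,
    last_finalized_eth_tx: Instant,
    fragment_accumulation_timeout: Duration,
) -> bool {
    pending_fragments > 0
        && (pending_fragments >= fragments_to_accumulate
            || last_finalized_eth_tx.elapsed() >= fragment_accumulation_timeout)
}
```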

Finally, the state listener polls for the tx status:
[screenshot]

Some notes:

  • We never send a new eth tx until we confirm the last one was finalized or failed
  • We keep working on new bundles even if there is congestion on the network; we use the time to optimize bundles while we wait for the blob tx to be accepted.

Metrics:
Two new metrics were added:

    blobs_per_tx: prometheus::Histogram,
    blob_used_bytes: prometheus::Histogram,

which can be used to track how often we send out transactions with 1,2,3,...,6 blobs, and how well utilized those blobs are.
Example query answering the question: "in 99% of cases in the last N minutes, what is the smallest amount of data we sent in a blob?"
[graph]
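For reference, a sketch of how such histograms could be registered and fed with the prometheus crate; the bucket boundaries here are illustrative, not the ones the committer uses:

```rust
use prometheus::{Histogram, HistogramOpts, Registry};

fn register_blob_metrics(registry: &Registry) -> prometheus::Result<(Histogram, Histogram)> {
    // How many blobs each blob tx carried (1..=6 on Ethereum today).
    let blobs_per_tx = Histogram::with_opts(
        HistogramOpts::new("blobs_per_tx", "number of blobs sent per blob tx")
            .buckets(vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0]),
    )?;
    // How many bytes of our data each blob actually carried.
    let blob_used_bytes = Histogram::with_opts(
        HistogramOpts::new("blob_used_bytes", "bytes of data carried per blob")
            .buckets(prometheus::linear_buckets(16_384.0, 16_384.0, 8)?),
    )?;
    registry.register(Box::new(blobs_per_tx.clone()))?;
    registry.register(Box::new(blob_used_bytes.clone()))?;
    Ok((blobs_per_tx, blob_used_bytes))
}

// After sending a tx:      blobs_per_tx.observe(num_blobs as f64);
// For each blob in the tx: blob_used_bytes.observe(used_bytes as f64);
```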

MujkicA previously approved these changes Sep 30, 2024
digorithm previously approved these changes Sep 30, 2024
hal3e previously approved these changes Oct 1, 2024
@segfault-magnet dismissed stale reviews from hal3e, digorithm, and MujkicA via 6bc9d36 on October 1, 2024
Br1ght0ne previously approved these changes Oct 1, 2024
@hal3e merged commit 29bf231 into master on Oct 2, 2024
9 checks passed
@hal3e deleted the feat/blob_fragmentation branch on October 2, 2024
Labels: enhancement (New feature or request)
Development

Successfully merging this pull request may close these issues.

  • Submit full fuel blocks to l1, not just tx hashes
  • blob utilization
  • Publish crate shouldn't run on release
6 participants