PeerDAS implementation (sigp#5683)

* 1D PeerDAS prototype: Data format and Distribution (sigp#5050)

* Build and publish column sidecars. Add stubs for gossip.

* Add blob column subnets

* Add `BlobColumnSubnetId` and initial compute subnet logic.

* Subscribe to blob column subnets.

* Introduce `BLOB_COLUMN_SUBNET_COUNT` based on DAS configuration parameter changes.

* Fix column sidecar type to use `VariableList` for data.

* Fix lint errors.

* Update types and naming to latest consensus-spec sigp#3574.

* Fix test and some cleanups.
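
For orientation, the subnet mapping this prototype introduces boils down to spreading column indices round-robin over a fixed number of gossip subnets. A minimal sketch, assuming the simple modulo scheme from the consensus specs; the constant name and value here are illustrative and were revised several times later in this branch:

```rust
/// Illustrative constant; the real value lives in the chain spec and was
/// revised repeatedly on this branch (32, then 64, then the devnet settings).
const DATA_COLUMN_SIDECAR_SUBNET_COUNT: u64 = 64;

/// Map a data column index to the gossip subnet it is published on.
fn compute_subnet_for_data_column_sidecar(column_index: u64) -> u64 {
    column_index % DATA_COLUMN_SIDECAR_SUBNET_COUNT
}

fn main() {
    assert_eq!(compute_subnet_for_data_column_sidecar(0), 0);
    assert_eq!(compute_subnet_for_data_column_sidecar(65), 1);
}
```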

* Merge branch 'unstable' into das

* Merge branch 'unstable' into das

* Merge branch 'unstable' into das

# Conflicts:
#	consensus/types/src/chain_spec.rs

* Add `DataColumnSidecarsByRoot` req/resp protocol (sigp#5196)

* Add stub for `DataColumnsByRoot`

* Add basic implementation of serving RPC data column from DA checker.

* Store data columns in early attester cache and blobs db.

* Apply suggestions from code review

Co-authored-by: Eitan Seri-Levi <[email protected]>
Co-authored-by: Jacob Kaufmann <[email protected]>

* Fix build.

* Store `DataColumnInfo` in database and various cleanups.

* Update `DataColumnSidecar` ssz max size and remove panic code.

---------

Co-authored-by: Eitan Seri-Levi <[email protected]>
Co-authored-by: Jacob Kaufmann <[email protected]>
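
The by-root protocol added here follows the same shape as the existing blob by-root request: the requester names (block root, column index) pairs and the responder serves matching sidecars from its availability caches or the blobs database. A rough sketch of the request shape, with stand-in types rather than the actual Lighthouse definitions:

```rust
/// Stand-in for the real 32-byte root type.
#[derive(Clone, Copy, Debug)]
struct Hash256([u8; 32]);

/// Identifies one column of one block, mirroring `DataColumnIdentifier`.
#[derive(Clone, Copy, Debug)]
struct DataColumnIdentifier {
    block_root: Hash256,
    index: u64,
}

/// Illustrative request body for a data-columns-by-root call.
struct DataColumnsByRootRequest {
    data_column_ids: Vec<DataColumnIdentifier>,
}

fn main() {
    let req = DataColumnsByRootRequest {
        data_column_ids: vec![DataColumnIdentifier {
            block_root: Hash256([0u8; 32]),
            index: 3,
        }],
    };
    println!(
        "requesting {} column(s): {:?}",
        req.data_column_ids.len(),
        req.data_column_ids
    );
}
```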

* feat: add DAS KZG in data col construction (sigp#5210)

* feat: add DAS KZG in data col construction

* refactor data col sidecar construction

* refactor: add data cols to GossipVerifiedBlockContents

* Disable windows tests for `das` branch. (c-kzg doesn't build on windows)

* Formatting and lint changes only.

* refactor: remove iters in construction of data cols

* Update vec capacity and error handling.

* Add `data_column_sidecar_computation_seconds` metric.

---------

Co-authored-by: Jimmy Chen <[email protected]>
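
Column construction in this PR can be pictured as a transpose: each blob is extended into a row of cells (with one KZG proof per cell), and column `j` collects cell `j` from every row plus the matching proofs. A simplified sketch with placeholder types and a stubbed KZG call, not the actual Lighthouse/KZG-crate API:

```rust
// Placeholder types; the real ones are fixed-size crypto types.
type Cell = Vec<u8>;
type Proof = [u8; 48];

/// Stub for the KZG library call that extends one blob into a row of
/// cells plus one proof per cell.
fn compute_cells_and_kzg_proofs(blob: &[u8]) -> (Vec<Cell>, Vec<Proof>) {
    let cells_per_ext_blob = 128; // illustrative extended-blob cell count
    (
        vec![blob.to_vec(); cells_per_ext_blob],
        vec![[0u8; 48]; cells_per_ext_blob],
    )
}

struct DataColumn {
    index: usize,
    cells: Vec<Cell>,   // one cell per blob in the block
    proofs: Vec<Proof>, // matching proofs
}

fn blobs_to_data_columns(blobs: &[Vec<u8>]) -> Vec<DataColumn> {
    // One row per blob: rows[b] holds (cells, proofs) for blob b.
    let rows: Vec<(Vec<Cell>, Vec<Proof>)> =
        blobs.iter().map(|b| compute_cells_and_kzg_proofs(b)).collect();
    let num_columns = rows.first().map(|(cells, _)| cells.len()).unwrap_or(0);

    // Transpose: column c gathers cell c (and proof c) from every row.
    (0..num_columns)
        .map(|c| DataColumn {
            index: c,
            cells: rows.iter().map(|(cells, _)| cells[c].clone()).collect(),
            proofs: rows.iter().map(|(_, proofs)| proofs[c]).collect(),
        })
        .collect()
}

fn main() {
    let columns = blobs_to_data_columns(&[vec![0u8; 32], vec![1u8; 32]]);
    assert_eq!(columns.len(), 128);
    // Each column carries one cell per blob.
    println!("column {} holds {} cells", columns[0].index, columns[0].cells.len());
}
```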

* Merge branch 'unstable' into das

# Conflicts:
#	.github/workflows/test-suite.yml
#	beacon_node/lighthouse_network/src/types/topics.rs

* fix: update data col subnet count from 64 to 32 (sigp#5413)

* feat: add peerdas custody field to ENR (sigp#5409)

* feat: add peerdas custody field to ENR

* add hash prefix step in subnet computation

* refactor test and fix possible u64 overflow

* default to min custody value if not present in ENR
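
The custody computation described in these notes follows the consensus-spec pseudocode of the time: hash an incrementing counter derived from the node id (the "hash prefix step"), take the first eight bytes of the digest as a little-endian integer, reduce it modulo the subnet count, and repeat until enough distinct subnets are collected. Wrapping arithmetic covers the u64 overflow mentioned above, and a missing `csc` ENR entry falls back to the minimum custody requirement. A sketch only, with the node id simplified to a `u64` (the real one is a 256-bit discv5 id) and the `sha2` crate standing in for the hash:

```rust
use sha2::{Digest, Sha256};
use std::collections::BTreeSet;

// Illustrative values; the real ones come from the chain spec and ENR.
const DATA_COLUMN_SIDECAR_SUBNET_COUNT: u64 = 64;
const CUSTODY_REQUIREMENT: u64 = 4;

fn custody_subnets(node_id: u64, custody_subnet_count: u64) -> BTreeSet<u64> {
    let mut subnets = BTreeSet::new();
    let mut current_id = node_id;
    while (subnets.len() as u64) < custody_subnet_count {
        // "Hash prefix step": hash the counter, keep the first 8 bytes.
        let digest = Sha256::digest(current_id.to_le_bytes());
        let prefix = u64::from_le_bytes(digest[..8].try_into().unwrap());
        subnets.insert(prefix % DATA_COLUMN_SIDECAR_SUBNET_COUNT);
        // Wrapping add avoids the u64 overflow fixed in this PR.
        current_id = current_id.wrapping_add(1);
    }
    subnets
}

fn main() {
    // A peer whose ENR carries no `csc` field gets the minimum requirement.
    let csc_from_enr: Option<u64> = None;
    let count = csc_from_enr.unwrap_or(CUSTODY_REQUIREMENT);
    let subnets = custody_subnets(42, count);
    assert_eq!(subnets.len() as u64, count);
}
```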

* Merge branch 'unstable' into das

* Merge branch 'unstable' into das-unstable-merge-0415

# Conflicts:
#	Cargo.lock
#	beacon_node/beacon_chain/src/data_availability_checker.rs
#	beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs
#	beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs
#	beacon_node/beacon_chain/src/data_availability_checker/processing_cache.rs
#	beacon_node/lighthouse_network/src/rpc/methods.rs
#	beacon_node/network/src/network_beacon_processor/mod.rs
#	beacon_node/network/src/sync/block_lookups/tests.rs
#	crypto/kzg/Cargo.toml

* Merge remote-tracking branch 'sigp/unstable' into das

* Merge remote-tracking branch 'sigp/unstable' into das

* Fix merge conflicts.

* Send custody data column to `DataAvailabilityChecker` for determining block importability (sigp#5570)

* Only import custody data columns after publishing a block.

* Add `subscribe-all-data-column-subnets` and pass custody column count to `availability_cache`.

* Add custody requirement checks to `availability_cache`.

* Fix config not being passed to DAChecker and add more logging.

* Introduce `peer_das_epoch` and make blobs and columns mutually exclusive.

* Add DA filter for PeerDAS.

* Fix data availability check and use test_logger in tests.

* Fix subscribe to all data column subnets not working correctly.

* Fix tests.

* Only publish column sidecars if PeerDAS is activated. Add `PEER_DAS_EPOCH` chain spec serialization.

* Remove unused data column index in `OverflowKey`.

* Fix column sidecars incorrectly produced when there are no blobs.

* Re-instate index to `OverflowKey::DataColumn` and downgrade noisy debug log to `trace`.
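
The net effect of this PR on importability is fork-dependent: before PeerDAS a block becomes available once all of its blobs arrive, after PeerDAS it becomes available once all of the node's custody columns arrive, and the two paths are mutually exclusive. A toy illustration of that rule; every type and field here is invented for the example:

```rust
/// Invented bookkeeping for one pending block.
struct PendingComponents {
    expected_blobs: usize,
    received_blobs: usize,
    custody_column_count: usize,
    received_custody_columns: usize,
}

fn is_available(peer_das_enabled: bool, c: &PendingComponents) -> bool {
    if peer_das_enabled {
        // Post-PeerDAS: blobs are no longer imported; wait for every
        // column in the node's custody set instead.
        c.received_custody_columns >= c.custody_column_count
    } else {
        // Pre-PeerDAS: the Deneb rule, all blobs for the block.
        c.received_blobs >= c.expected_blobs
    }
}

fn main() {
    let pending = PendingComponents {
        expected_blobs: 3,
        received_blobs: 0,
        custody_column_count: 8,
        received_custody_columns: 8,
    };
    assert!(is_available(true, &pending));
    assert!(!is_available(false, &pending));
}
```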

* DAS sampling on sync (sigp#5616)

* Data availability sampling on sync

* Address @jimmygchen review

* Trigger sampling

* Address some review comments and only send `SamplingBlock` sync message after PEER_DAS_EPOCH.

---------

Co-authored-by: Jimmy Chen <[email protected]>
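
Sampling here means checking a small random subset of columns per block rather than the node's full custody set, and the block only counts as sampled once every requested column is retrieved. A sketch of the column selection only, with illustrative constants and `rand` doing the selection (the request plumbing is omitted):

```rust
use rand::seq::index::sample;

// Illustrative constants; the real values come from the DAS config.
const NUMBER_OF_COLUMNS: usize = 128;
const SAMPLES_PER_SLOT: usize = 8;

/// Pick the distinct column indices to sample for one block.
fn select_sampling_columns() -> Vec<usize> {
    let mut rng = rand::thread_rng();
    sample(&mut rng, NUMBER_OF_COLUMNS, SAMPLES_PER_SLOT).into_vec()
}

fn main() {
    let columns = select_sampling_columns();
    assert_eq!(columns.len(), SAMPLES_PER_SLOT);
    println!("sampling columns {:?}", columns);
}
```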

* Merge branch 'unstable' into das

# Conflicts:
#	Cargo.lock
#	Cargo.toml
#	beacon_node/beacon_chain/src/block_verification.rs
#	beacon_node/http_api/src/publish_blocks.rs
#	beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs
#	beacon_node/lighthouse_network/src/rpc/protocol.rs
#	beacon_node/lighthouse_network/src/types/pubsub.rs
#	beacon_node/network/src/sync/block_lookups/single_block_lookup.rs
#	beacon_node/store/src/hot_cold_store.rs
#	consensus/types/src/beacon_state.rs
#	consensus/types/src/chain_spec.rs
#	consensus/types/src/eth_spec.rs

* Merge branch 'unstable' into das

* Re-process early sampling requests (sigp#5569)

* Re-process early sampling requests

# Conflicts:
#	beacon_node/beacon_processor/src/work_reprocessing_queue.rs
#	beacon_node/lighthouse_network/src/rpc/methods.rs
#	beacon_node/network/src/network_beacon_processor/rpc_methods.rs

* Update beacon_node/beacon_processor/src/work_reprocessing_queue.rs

Co-authored-by: Jimmy Chen <[email protected]>

* Add missing var

* Beta compiler fixes and small typo fixes.

* Remove duplicate method.

---------

Co-authored-by: Jimmy Chen <[email protected]>

* Merge remote-tracking branch 'sigp/unstable' into das

* Fix merge conflict.

* Add data columns by root to currently supported protocol list (sigp#5678)

* Add data columns by root to currently supported protocol list.

* Add missing data column by roots handling.

* Merge branch 'unstable' into das

# Conflicts:
#	Cargo.lock
#	Cargo.toml
#	beacon_node/network/src/sync/block_lookups/tests.rs
#	beacon_node/network/src/sync/manager.rs

* Fix simulator tests on `das` branch (sigp#5731)

* Bump genesis delay in sim tests as KZG setup takes longer for DAS.

* Fix incorrect YAML spacing.

* DataColumnByRange boilerplate (sigp#5353)

* add boilerplate

* fmt

* PeerDAS custody lookup sync (sigp#5684)

* Implement custody sync

* Lint

* Fix tests

* Fix rebase issue

* Add data column kzg verification and update `c-kzg`. (sigp#5701)

* Add data column kzg verification and update `c-kzg`.

* Fix incorrect `Cell` size.

* Add kzg verification on rpc blocks.

* Add kzg verification on rpc data columns.

* Rename `PEER_DAS_EPOCH` to `EIP7594_FORK_EPOCH` for client interop. (sigp#5750)

* Fetch custody columns in range sync (sigp#5747)

* Fetch custody columns in range sync

* Clean up todos

* Remove `BlobSidecar` construction and publish after PeerDAS activated (sigp#5759)

* Avoid building and publishing blob sidecars after PeerDAS.

* Ignore gossip blobs with a slot greater than peer das activation epoch.

* Only attempt to verify blob count and import blobs before PeerDAS.

* sigp#5684 review comments (sigp#5748)

* sigp#5684 review comments.

* Doc and message update only.

* Fix incorrect condition when constructing `RpcBlock` with `DataColumn`s

* Make sampling tests deterministic (sigp#5775)

* PeerDAS spec tests (sigp#5772)

* Add get_custody_columns spec tests.

* Add kzg merkle proof spec tests.

* Add SSZ spec tests.

* Add remaining KZG tests

* Load KZG only once per process, exclude electra tests and add missing SSZ tests.

* Fix lint and missing changes.

* Ignore macOS generated file.

* Merge remote branch 'sigp/unstable' into das

* Merge remote tracking branch 'origin/unstable' into das

* Implement unconditional reconstruction for supernodes (sigp#5781)

* Implement unconditional reconstruction for supernodes

* Move code into KzgVerifiedCustodyDataColumn

* Remove expect

* Add test

* Thanks justin
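
Reconstruction works because the cells of each extended blob form a 2x erasure code, so any half of the columns determines the rest; a supernode that custodies every column can therefore rebuild (and re-publish) the missing half once it holds 50% of them, and should only try once. A sketch of that trigger with invented helper names:

```rust
// Invented names; only the trigger condition is the point here.
const NUMBER_OF_COLUMNS: usize = 128;

struct ColumnCache {
    received: Vec<usize>,         // distinct column indices seen so far
    reconstruction_started: bool, // "only attempt reconstruction once"
}

fn should_reconstruct(custodies_all_columns: bool, cache: &ColumnCache) -> bool {
    custodies_all_columns
        && !cache.reconstruction_started
        && cache.received.len() >= NUMBER_OF_COLUMNS / 2
}

fn main() {
    let cache = ColumnCache {
        received: (0..64).collect(),
        reconstruction_started: false,
    };
    // Half of the columns is enough to recover the other half.
    assert!(should_reconstruct(true, &cache));
    assert!(!should_reconstruct(false, &cache));
}
```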

* Add withhold attack mode for interop (sigp#5788)

* Add withhold attack mode

* Update readme

* Drop added readmes

* Undo styling changes

* Add column gossip verification and handle unknown parent block (sigp#5783)

* Add column gossip verification and handle missing parent for columns.

* Review PR

* Fix rebase issue

* more lint issues :)

---------

Co-authored-by: dapplion <[email protected]>

* Trigger sampling on sync events (sigp#5776)

* Trigger sampling on sync events

* Update beacon_chain.rs

* Fix tests

* Fix tests

* PeerDAS parameter changes for devnet-0 (sigp#5779)

* Update PeerDAS parameters to latest values.

* Lint fix

* Fix lint.

* Update hardcoded subnet count to 64 (sigp#5791)

* Fix incorrect columns per subnet and config cleanup (sigp#5792)

* Tidy up PeerDAS preset and config values.

* Fix broken config

* Fix DAS branch CI (sigp#5793)

* Fix invalid syntax.

* Update cli doc. Ignore get_custody_columns test temporarily.

* Fix failing test and add verify inclusion test.

* Undo accidentally removed code.

* Only attempt reconstruct columns once. (sigp#5794)

* Re-enable precompute table for peerdas kzg (sigp#5795)

* Merge branch 'unstable' into das

* Update subscription filter. (sigp#5797)

* Remove penalty for duplicate columns (expected due to reconstruction) (sigp#5798)

* Revert DAS config for interop testing. Optimise get_custody_columns function. (sigp#5799)

* Don't perform reconstruction for proposer node as it already has all the columns. (sigp#5806)

* Multithread compute_cells_and_proofs (sigp#5805)

* Multi-thread reconstruct data columns

* Multi-thread path for block production
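
The per-blob cell-and-proof computation is independent across blobs, so it parallelises cleanly. A sketch of the idea using `rayon` directly; the stubbed function stands in for the expensive KZG work, and the real code routes the computation through Lighthouse's own task executor rather than a bare thread pool:

```rust
use rayon::prelude::*;

/// Stand-in for the expensive per-blob FFT/KZG work.
fn compute_cells_and_proofs_for_blob(blob: &[u8]) -> Vec<u8> {
    blob.iter().map(|b| b.wrapping_add(1)).collect()
}

/// Compute cells and proofs for all blobs of a block, one rayon task each.
fn compute_all(blobs: &[Vec<u8>]) -> Vec<Vec<u8>> {
    blobs
        .par_iter()
        .map(|blob| compute_cells_and_proofs_for_blob(blob))
        .collect()
}

fn main() {
    let blobs = vec![vec![0u8; 32]; 6];
    assert_eq!(compute_all(&blobs).len(), 6);
}
```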

* Merge branch 'unstable' into das

# Conflicts:
#	.github/workflows/test-suite.yml
#	beacon_node/network/src/sync/block_lookups/mod.rs
#	beacon_node/network/src/sync/block_lookups/single_block_lookup.rs
#	beacon_node/network/src/sync/network_context.rs

* Fix CI errors.

* Move PeerDAS type-level config to configurable `ChainSpec` (sigp#5828)

* Move PeerDAS type level config to `ChainSpec`.

* Fix tests

* Misc custody lookup improvements (sigp#5821)

* Improve custody requests

* Type DataColumnsByRootRequestId

* Prioritize peers and load balance

* Update tests

* Address PR review

* Merge branch 'unstable' into das

* Rename deploy_block in network config (`das` branch) (sigp#5852)

* Rename deploy_block.txt to deposit_contract_block.txt

* fmt

---------

Co-authored-by: Pawan Dhananjay <[email protected]>

* Merge branch 'unstable' into das

* Fix CI and merge issues.

* Merge branch 'unstable' into das

# Conflicts:
#	beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs
#	lcli/src/main.rs

* Store data columns individually in store and caches (sigp#5890)

* Store data columns individually in store and caches

* Implement data column pruning

* Merge branch 'unstable' into das

# Conflicts:
#	Cargo.lock

* Update reconstruction benches to newer criterion version. (sigp#5949)

* Merge branch 'unstable' into das

# Conflicts:
#	.github/workflows/test-suite.yml

* chore: add `recover_cells_and_compute_proofs` method (sigp#5938)

* chore: add recover_cells_and_compute_proofs method

* Introduce type alias `CellsAndKzgProofs` to address type complexity.

---------

Co-authored-by: Jimmy Chen <[email protected]>
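
The alias simply bundles the two fixed-size arrays that cell recovery returns, which is what was cluttering the signatures. A shape-only sketch; the element sizes and cell count here are illustrative rather than the preset values used by the crate:

```rust
// Illustrative sizes only.
const CELLS_PER_EXT_BLOB: usize = 128;
const BYTES_PER_CELL: usize = 2048;
const BYTES_PER_PROOF: usize = 48;

type Cell = [u8; BYTES_PER_CELL];
type KzgProof = [u8; BYTES_PER_PROOF];

/// Recovery yields one cell and one proof per extended-blob cell; keeping
/// them in one alias tames the return-type complexity noted above.
type CellsAndKzgProofs = (
    Box<[Cell; CELLS_PER_EXT_BLOB]>,
    Box<[KzgProof; CELLS_PER_EXT_BLOB]>,
);

fn empty_cells_and_proofs() -> CellsAndKzgProofs {
    (
        Box::new([[0u8; BYTES_PER_CELL]; CELLS_PER_EXT_BLOB]),
        Box::new([[0u8; BYTES_PER_PROOF]; CELLS_PER_EXT_BLOB]),
    )
}

fn main() {
    let (cells, proofs) = empty_cells_and_proofs();
    assert_eq!(cells.len(), proofs.len());
}
```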

* Update `csc` format in ENR and spec tests for devnet-1 (sigp#5966)

* Update `csc` format in ENR.

* Add spec tests for `recover_cells_and_kzg_proofs`.

* Add tests for ENR.

* Fix failing tests.

* Add protection against invalid csc value in ENR.

* Fix lint

* Fix csc encoding and decoding (sigp#5997)

* Fix data column rpc request not being sent due to incorrect limits set. (sigp#6000)

* Fix incorrect inbound request count causing rate limiting. (sigp#6025)

* Merge branch 'stable' into das

# Conflicts:
#	beacon_node/network/src/sync/block_lookups/tests.rs
#	beacon_node/network/src/sync/block_sidecar_coupling.rs
#	beacon_node/network/src/sync/manager.rs
#	beacon_node/network/src/sync/network_context.rs
#	beacon_node/network/src/sync/network_context/requests.rs

* Merge remote-tracking branch 'unstable' into das

* Add kurtosis config for DAS testing (sigp#5968)

* Add kurtosis config for DAS testing.

* Fix invalid yaml file

* Update network parameter files.

* chore: add rust PeerdasKZG crypto library for peerdas functionality and rollback c-kzg dependency to 4844 version (sigp#5941)

* chore: add recover_cells_and_compute_proofs method

* chore: add rust peerdas crypto library

* chore: integrate peerdaskzg rust library into kzg crate

* chore(multi):

- update `ssz_cell_to_crypto_cell`
- update conversion from the crypto cell type to a Vec<u8>. Since the Rust library defines them as references to an array, the conversion is simply `to_vec`

* chore(multi):

- update rest of code to handle the new crypto `Cell` type
- update test case code to no longer use the Box type

* chore: cleanup of superfluous conversions

* chore: revert c-kzg dependency back to v1

* chore: move dependency into correct order

* chore: update rust dependency

- This version includes a new method `PeerDasContext::with_num_threads`

* chore: remove Default initialization of PeerDasContext and explicitly set the parameters in `new_from_trusted_setup`

* chore: cleanup exports

* chore: commit updated cargo.lock

* Update Cargo.toml

Co-authored-by: Jimmy Chen <[email protected]>

* chore: rename dependency

* chore: update peerdas lib

- sets the blst version to 0.3 so that it matches whatever lighthouse is using. Although 0.3.12 is latest, lighthouse is pinned to 0.3.3

* chore: fix clippy lifetime

- Rust doesn't allow you to elide the lifetime on type aliases

* chore: cargo clippy fix

* chore: cargo fmt

* chore: update lib to add redundant checks (these will be removed in consensus-specs PR 3819)

* chore: update dependency to ignore proofs

* chore: update peerdas lib to latest

* update lib

* chore: remove empty proof parameter

---------

Co-authored-by: Jimmy Chen <[email protected]>

* Update PeerDAS interop testnet config (sigp#6069)

* Update interop testnet config.

* Fix typo and remove target peers

* Avoid retrying same sampling peer that previously failed. (sigp#6084)

* Various fixes to custody range sync  (sigp#6004)

* Only start requesting batches when there are good peers across all custody columns, to avoid spamming block requests (see the sketch after this PR's notes).

* Add custody peer check before mutating `BatchInfo` to avoid inconsistent state.

* Add check to cover a case where batch is not processed while waiting for custody peers to become available.

* Fix lint and logic error

* Fix `good_peers_on_subnet` always returning false for `DataColumnSubnet`.

* Add test for `get_custody_peers_for_column`

* Revert epoch parameter refactor.

* Fall back to the default custody requirement if the peer's ENR is not present.

* Add metrics and update code comment.

* Add more debug logs.

* Use subscribed peers on subnet before MetaDataV3 is implemented. Remove peer_id matching when injecting error because multiple peers are used for range requests. Use randomized custodial peer to avoid repeatedly sending requests to failing peers. Batch by range request where possible.

* Remove unused code and update docs.

* Add comment
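
The batch gate from the first item above reduces to: for every column in our custody set, at least one connected peer must advertise custody of it before block requests for the batch are issued. A toy sketch with invented types:

```rust
use std::collections::HashSet;

/// Invented peer view: the column indices this peer claims to custody.
struct Peer {
    custody_columns: HashSet<u64>,
}

/// Only request a batch once every custody column has a serving peer,
/// so we don't spam block requests we cannot complete with columns.
fn good_peers_for_all_custody_columns(our_columns: &[u64], peers: &[Peer]) -> bool {
    our_columns
        .iter()
        .all(|col| peers.iter().any(|p| p.custody_columns.contains(col)))
}

fn main() {
    let peers = vec![Peer {
        custody_columns: [0, 1, 2, 3].into_iter().collect(),
    }];
    assert!(good_peers_for_all_custody_columns(&[0, 3], &peers));
    assert!(!good_peers_for_all_custody_columns(&[0, 9], &peers));
}
```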

* chore: update peerdas-kzg library (sigp#6118)

* chore: update peerDAS lib

* chore: update library

* chore: update library to version that include "init context" benchmarks and optional validation checks

* chore: (can remove) -- Add benchmarks for init context

* Prevent continuous searches for low-peer networks (sigp#6162)

* Merge branch 'unstable' into das

* Fix merge conflicts

* Add cli flag to enable sampling and disable by default. (sigp#6209)

* chore: Use reference to an array representing a blob instead of an owned KzgBlob (sigp#6179)

* add KzgBlobRef type

* modify code to use KzgBlobRef

* clippy

* Remove Deneb blob related changes to maintain compatibility with `c-kzg-4844`.

---------

Co-authored-by: Jimmy Chen <[email protected]>
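
The change above avoids cloning each blob (roughly 128 KiB) into an owned `KzgBlob` just to call into the KZG library; a borrowed fixed-size array is enough. A sketch of the alias, with the blob size taken from the Deneb preset for illustration; the lifetime has to be written out, matching the clippy note earlier in this log:

```rust
// Illustrative size: 4096 field elements x 32 bytes per element.
const BYTES_PER_BLOB: usize = 4096 * 32;

/// Borrow a blob as a fixed-size byte array instead of copying it into an
/// owned type before every KZG call.
pub type KzgBlobRef<'a> = &'a [u8; BYTES_PER_BLOB];

fn commit_to_blob(_blob: KzgBlobRef<'_>) {
    // Placeholder for the real KZG commitment call.
}

fn main() {
    let blob = vec![0u8; BYTES_PER_BLOB];
    let blob_ref: KzgBlobRef<'_> = blob.as_slice().try_into().expect("exact length");
    commit_to_blob(blob_ref);
}
```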

* Store computed custody subnets in PeerDB and fix custody lookup test (sigp#6218)

* Fix failing custody lookup tests.

* Store custody subnets in PeerDB, fix custody lookup test and refactor some methods.

* Merge branch 'unstable' into das

# Conflicts:
#	beacon_node/beacon_chain/src/beacon_chain.rs
#	beacon_node/beacon_chain/src/block_verification_types.rs
#	beacon_node/beacon_chain/src/builder.rs
#	beacon_node/beacon_chain/src/data_availability_checker.rs
#	beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs
#	beacon_node/beacon_chain/src/data_column_verification.rs
#	beacon_node/beacon_chain/src/early_attester_cache.rs
#	beacon_node/beacon_chain/src/historical_blocks.rs
#	beacon_node/beacon_chain/tests/store_tests.rs
#	beacon_node/lighthouse_network/src/discovery/enr.rs
#	beacon_node/network/src/service.rs
#	beacon_node/src/cli.rs
#	beacon_node/store/src/hot_cold_store.rs
#	beacon_node/store/src/lib.rs
#	lcli/src/generate_bootnode_enr.rs

* Fix CI failures after merge.

* Batch sampling requests by peer (sigp#6256)

* Batch sampling requests by peer

* Fix clippy errors

* Fix tests

* Add column_index to error message for ease of tracing

* Remove outdated comment

* Fix range sync never evaluating request as finished, causing it to get stuck. (sigp#6276)

* Merge branch 'unstable' into das-0821-merge

# Conflicts:
#	Cargo.lock
#	Cargo.toml
#	beacon_node/beacon_chain/src/beacon_chain.rs
#	beacon_node/beacon_chain/src/data_availability_checker.rs
#	beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs
#	beacon_node/beacon_chain/src/data_column_verification.rs
#	beacon_node/beacon_chain/src/kzg_utils.rs
#	beacon_node/beacon_chain/src/metrics.rs
#	beacon_node/beacon_processor/src/lib.rs
#	beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs
#	beacon_node/lighthouse_network/src/rpc/config.rs
#	beacon_node/lighthouse_network/src/rpc/methods.rs
#	beacon_node/lighthouse_network/src/rpc/outbound.rs
#	beacon_node/lighthouse_network/src/rpc/rate_limiter.rs
#	beacon_node/lighthouse_network/src/service/api_types.rs
#	beacon_node/lighthouse_network/src/types/globals.rs
#	beacon_node/network/src/network_beacon_processor/mod.rs
#	beacon_node/network/src/network_beacon_processor/rpc_methods.rs
#	beacon_node/network/src/network_beacon_processor/sync_methods.rs
#	beacon_node/network/src/sync/block_lookups/common.rs
#	beacon_node/network/src/sync/block_lookups/mod.rs
#	beacon_node/network/src/sync/block_lookups/single_block_lookup.rs
#	beacon_node/network/src/sync/block_lookups/tests.rs
#	beacon_node/network/src/sync/manager.rs
#	beacon_node/network/src/sync/network_context.rs
#	consensus/types/src/data_column_sidecar.rs
#	crypto/kzg/Cargo.toml
#	crypto/kzg/benches/benchmark.rs
#	crypto/kzg/src/lib.rs

* Fix custody tests and load PeerDAS KZG instead.

* Fix ef tests and bench compilation.

* Fix failing sampling test.

* Merge pull request sigp#6287 from jimmygchen/das-0821-merge

Merge `unstable` into `das` 20240821

* Remove get_block_import_status

* Merge branch 'unstable' into das

* Re-enable Windows release tests.

* Address some review comments.

* Address more review comments and cleanups.

* Comment out peer DAS KZG EF tests for now

* Address more review comments and fix build.

* Merge branch 'das' of github.com:sigp/lighthouse into das

* Unignore Electra tests

* Fix metric name

* Address some of Pawan's review comments

* Merge remote-tracking branch 'origin/unstable' into das

* Update PeerDAS network parameters for peerdas-devnet-2  (sigp#6290)

* update subnet count & custody req

* das network params

* update ef tests

---------

Co-authored-by: Jimmy Chen <[email protected]>
2 people authored and AgeManning committed Sep 3, 2024
1 parent 388ec32 commit bddd3d9
Showing 96 changed files with 5,002 additions and 609 deletions.
2 changes: 2 additions & 0 deletions Cargo.lock


5 changes: 5 additions & 0 deletions beacon_node/beacon_chain/Cargo.toml
@@ -5,6 +5,10 @@ authors = ["Paul Hauner <[email protected]>", "Age Manning <[email protected]
edition = { workspace = true }
autotests = false # using a single test binary compiles faster

[[bench]]
name = "benches"
harness = false

[features]
default = ["participation_metrics"]
write_ssz_files = [] # Writes debugging .ssz files to /tmp during block processing.
@@ -16,6 +20,7 @@ test_backfill = []
[dev-dependencies]
maplit = { workspace = true }
serde_json = { workspace = true }
criterion = { workspace = true }

[dependencies]
bitvec = { workspace = true }
66 changes: 66 additions & 0 deletions beacon_node/beacon_chain/benches/benches.rs
@@ -0,0 +1,66 @@
use std::sync::Arc;

use beacon_chain::kzg_utils::{blobs_to_data_column_sidecars, reconstruct_data_columns};
use criterion::{black_box, criterion_group, criterion_main, Criterion};

use bls::Signature;
use eth2_network_config::TRUSTED_SETUP_BYTES;
use kzg::{Kzg, KzgCommitment, TrustedSetup};
use types::{
beacon_block_body::KzgCommitments, BeaconBlock, BeaconBlockDeneb, Blob, BlobsList, ChainSpec,
EmptyBlock, EthSpec, MainnetEthSpec, SignedBeaconBlock,
};

fn create_test_block_and_blobs<E: EthSpec>(
num_of_blobs: usize,
spec: &ChainSpec,
) -> (SignedBeaconBlock<E>, BlobsList<E>) {
let mut block = BeaconBlock::Deneb(BeaconBlockDeneb::empty(spec));
let mut body = block.body_mut();
let blob_kzg_commitments = body.blob_kzg_commitments_mut().unwrap();
*blob_kzg_commitments =
KzgCommitments::<E>::new(vec![KzgCommitment::empty_for_testing(); num_of_blobs]).unwrap();

let signed_block = SignedBeaconBlock::from_block(block, Signature::empty());

let blobs = (0..num_of_blobs)
.map(|_| Blob::<E>::default())
.collect::<Vec<_>>()
.into();

(signed_block, blobs)
}

fn all_benches(c: &mut Criterion) {
type E = MainnetEthSpec;
let spec = Arc::new(E::default_spec());

let trusted_setup: TrustedSetup = serde_json::from_reader(TRUSTED_SETUP_BYTES)
.map_err(|e| format!("Unable to read trusted setup file: {}", e))
.expect("should have trusted setup");
let kzg = Arc::new(Kzg::new_from_trusted_setup(trusted_setup).expect("should create kzg"));

for blob_count in [1, 2, 3, 6] {
let kzg = kzg.clone();
let (signed_block, blob_sidecars) = create_test_block_and_blobs::<E>(blob_count, &spec);

let column_sidecars =
blobs_to_data_column_sidecars(&blob_sidecars, &signed_block, &kzg.clone(), &spec)
.unwrap();

let spec = spec.clone();

c.bench_function(&format!("reconstruct_{}", blob_count), |b| {
b.iter(|| {
black_box(reconstruct_data_columns(
&kzg,
&column_sidecars.iter().as_slice()[0..column_sidecars.len() / 2],
spec.as_ref(),
))
})
});
}
}

criterion_group!(benches, all_benches);
criterion_main!(benches);
117 changes: 89 additions & 28 deletions beacon_node/beacon_chain/src/beacon_chain.rs
@@ -22,6 +22,7 @@ pub use crate::canonical_head::CanonicalHead;
use crate::chain_config::ChainConfig;
use crate::data_availability_checker::{
Availability, AvailabilityCheckError, AvailableBlock, DataAvailabilityChecker,
DataColumnsToPublish,
};
use crate::data_column_verification::{GossipDataColumnError, GossipVerifiedDataColumn};
use crate::early_attester_cache::EarlyAttesterCache;
@@ -123,6 +124,7 @@ use task_executor::{ShutdownReason, TaskExecutor};
use tokio_stream::Stream;
use tree_hash::TreeHash;
use types::blob_sidecar::FixedBlobSidecarList;
use types::data_column_sidecar::{ColumnIndex, DataColumnIdentifier};
use types::payload::BlockProductionVersion;
use types::*;

@@ -206,11 +208,13 @@ impl TryInto<Hash256> for AvailabilityProcessingStatus {
/// The result of a chain segment processing.
pub enum ChainSegmentResult<E: EthSpec> {
/// Processing this chain segment finished successfully.
Successful { imported_blocks: usize },
Successful {
imported_blocks: Vec<(Hash256, Slot)>,
},
/// There was an error processing this chain segment. Before the error, some blocks could
/// have been imported.
Failed {
imported_blocks: usize,
imported_blocks: Vec<(Hash256, Slot)>,
error: BlockError<E>,
},
}
@@ -2696,7 +2700,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
chain_segment: Vec<RpcBlock<T::EthSpec>>,
) -> Result<Vec<HashBlockTuple<T::EthSpec>>, ChainSegmentResult<T::EthSpec>> {
// This function will never import any blocks.
let imported_blocks = 0;
let imported_blocks = vec![];
let mut filtered_chain_segment = Vec::with_capacity(chain_segment.len());

// Produce a list of the parent root and slot of the child of each block.
@@ -2802,7 +2806,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
chain_segment: Vec<RpcBlock<T::EthSpec>>,
notify_execution_layer: NotifyExecutionLayer,
) -> ChainSegmentResult<T::EthSpec> {
let mut imported_blocks = 0;
let mut imported_blocks = vec![];

// Filter uninteresting blocks from the chain segment in a blocking task.
let chain = self.clone();
@@ -2862,6 +2866,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {

// Import the blocks into the chain.
for signature_verified_block in signature_verified_blocks {
let block_slot = signature_verified_block.slot();
match self
.process_block(
signature_verified_block.block_root(),
@@ -2874,9 +2879,9 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
{
Ok(status) => {
match status {
AvailabilityProcessingStatus::Imported(_) => {
AvailabilityProcessingStatus::Imported(block_root) => {
// The block was imported successfully.
imported_blocks += 1;
imported_blocks.push((block_root, block_slot));
}
AvailabilityProcessingStatus::MissingComponents(slot, block_root) => {
warn!(self.log, "Blobs missing in response to range request";
@@ -2909,6 +2914,17 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
ChainSegmentResult::Successful { imported_blocks }
}

/// Updates fork-choice node into a permanent `available` state so it can become a viable head.
/// Only completed sampling results are received. Blocks are unavailable by default and should
/// be pruned on finalization, on a timeout or by a max count.
pub async fn process_sampling_completed(self: &Arc<Self>, block_root: Hash256) {
// TODO(das): update fork-choice
// NOTE: It is possible that sampling completes before the block is imported into fork choice,
// in that case we may need to update availability cache.
// TODO(das): These log levels are too high, reduce once DAS matures
info!(self.log, "Sampling completed"; "block_root" => %block_root);
}

/// Returns `Ok(GossipVerifiedBlock)` if the supplied `block` should be forwarded onto the
/// gossip network. The block is not imported into the chain, it is just partially verified.
///
@@ -2983,6 +2999,11 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
return Err(BlockError::BlockIsAlreadyKnown(blob.block_root()));
}

// No need to process and import blobs beyond the PeerDAS epoch.
if self.spec.is_peer_das_enabled_for_epoch(blob.epoch()) {
return Err(BlockError::BlobNotRequired(blob.slot()));
}

if let Some(event_handler) = self.event_handler.as_ref() {
if event_handler.has_blob_sidecar_subscribers() {
event_handler.register(EventKind::BlobSidecar(SseBlobSidecar::from_blob_sidecar(
@@ -3000,7 +3021,13 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
pub async fn process_gossip_data_columns(
self: &Arc<Self>,
data_columns: Vec<GossipVerifiedDataColumn<T>>,
) -> Result<AvailabilityProcessingStatus, BlockError<T::EthSpec>> {
) -> Result<
(
AvailabilityProcessingStatus,
DataColumnsToPublish<T::EthSpec>,
),
BlockError<T::EthSpec>,
> {
let Ok((slot, block_root)) = data_columns
.iter()
.map(|c| (c.slot(), c.block_root()))
@@ -3067,7 +3094,13 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
pub async fn process_rpc_custody_columns(
self: &Arc<Self>,
custody_columns: DataColumnSidecarList<T::EthSpec>,
) -> Result<AvailabilityProcessingStatus, BlockError<T::EthSpec>> {
) -> Result<
(
AvailabilityProcessingStatus,
DataColumnsToPublish<T::EthSpec>,
),
BlockError<T::EthSpec>,
> {
let Ok((slot, block_root)) = custody_columns
.iter()
.map(|c| (c.slot(), c.block_root()))
@@ -3094,7 +3127,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
let r = self
.check_rpc_custody_columns_availability_and_import(slot, block_root, custody_columns)
.await;
self.remove_notified(&block_root, r)
self.remove_notified_custody_columns(&block_root, r)
}

/// Remove any block components from the *processing cache* if we no longer require them. If the
@@ -3114,13 +3147,15 @@ impl<T: BeaconChainTypes> BeaconChain<T> {

/// Remove any block components from the *processing cache* if we no longer require them. If the
/// block was imported full or erred, we no longer require them.
fn remove_notified_custody_columns(
fn remove_notified_custody_columns<P>(
&self,
block_root: &Hash256,
r: Result<AvailabilityProcessingStatus, BlockError<T::EthSpec>>,
) -> Result<AvailabilityProcessingStatus, BlockError<T::EthSpec>> {
let has_missing_components =
matches!(r, Ok(AvailabilityProcessingStatus::MissingComponents(_, _)));
r: Result<(AvailabilityProcessingStatus, P), BlockError<T::EthSpec>>,
) -> Result<(AvailabilityProcessingStatus, P), BlockError<T::EthSpec>> {
let has_missing_components = matches!(
r,
Ok((AvailabilityProcessingStatus::MissingComponents(_, _), _))
);
if !has_missing_components {
self.reqresp_pre_import_cache.write().remove(block_root);
}
@@ -3378,20 +3413,26 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
slot: Slot,
block_root: Hash256,
data_columns: Vec<GossipVerifiedDataColumn<T>>,
) -> Result<AvailabilityProcessingStatus, BlockError<T::EthSpec>> {
) -> Result<
(
AvailabilityProcessingStatus,
DataColumnsToPublish<T::EthSpec>,
),
BlockError<T::EthSpec>,
> {
if let Some(slasher) = self.slasher.as_ref() {
for data_colum in &data_columns {
slasher.accept_block_header(data_colum.signed_block_header());
}
}

let availability = self.data_availability_checker.put_gossip_data_columns(
slot,
block_root,
data_columns,
)?;
let (availability, data_columns_to_publish) = self
.data_availability_checker
.put_gossip_data_columns(slot, block_root, data_columns)?;

self.process_availability(slot, availability).await
self.process_availability(slot, availability)
.await
.map(|result| (result, data_columns_to_publish))
}

/// Checks if the provided blobs can make any cached blocks available, and imports immediately
@@ -3440,7 +3481,13 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
slot: Slot,
block_root: Hash256,
custody_columns: DataColumnSidecarList<T::EthSpec>,
) -> Result<AvailabilityProcessingStatus, BlockError<T::EthSpec>> {
) -> Result<
(
AvailabilityProcessingStatus,
DataColumnsToPublish<T::EthSpec>,
),
BlockError<T::EthSpec>,
> {
// Need to scope this to ensure the lock is dropped before calling `process_availability`
// Even an explicit drop is not enough to convince the borrow checker.
{
@@ -3465,13 +3512,16 @@ impl<T: BeaconChainTypes> BeaconChain<T> {

// This slot value is purely informative for the consumers of
// `AvailabilityProcessingStatus::MissingComponents` to log an error with a slot.
let availability = self.data_availability_checker.put_rpc_custody_columns(
block_root,
slot.epoch(T::EthSpec::slots_per_epoch()),
custody_columns,
)?;
let (availability, data_columns_to_publish) =
self.data_availability_checker.put_rpc_custody_columns(
block_root,
slot.epoch(T::EthSpec::slots_per_epoch()),
custody_columns,
)?;

self.process_availability(slot, availability).await
self.process_availability(slot, availability)
.await
.map(|result| (result, data_columns_to_publish))
}

/// Imports a fully available block. Otherwise, returns `AvailabilityProcessingStatus::MissingComponents`
@@ -3522,6 +3572,8 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
);
}

// TODO(das) record custody column available timestamp

// import
let chain = self.clone();
let block_root = self
@@ -6895,6 +6947,15 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
&& self.spec.is_peer_das_enabled_for_epoch(block_epoch)
}

/// Returns true if we should issue a sampling request for this block
/// TODO(das): check if the block is still within the da_window
pub fn should_sample_slot(&self, slot: Slot) -> bool {
self.config.enable_sampling
&& self
.spec
.is_peer_das_enabled_for_epoch(slot.epoch(T::EthSpec::slots_per_epoch()))
}

pub fn logger(&self) -> &Logger {
&self.log
}
4 changes: 2 additions & 2 deletions beacon_node/beacon_chain/src/blob_verification.rs
@@ -409,8 +409,8 @@ pub fn validate_blob_sidecar_for_gossip<T: BeaconChainTypes>(
// Verify that the blob_sidecar was received on the correct subnet.
if blob_index != subnet {
return Err(GossipBlobError::InvalidSubnet {
expected: blob_index,
received: subnet,
expected: subnet,
received: blob_index,
});
}
