Feat/evm devnet ready #772
Conversation
Looks good; a few areas I think could be improved or explored:
```rust
Ok(FrontierPartialComponents {
    filter_pool: Some(Arc::new(Mutex::new(BTreeMap::new()))),
    fee_history_cache: Arc::new(Mutex::new(BTreeMap::new())),
    fee_history_cache_limit: config.fee_history_limit,
})
```
Would be good to discuss the performance implications of having a BTreeMap
behind an Arc + Mutex. Perhaps something like a DashMap would be more appropriate if key ordering doesn't matter? Ideally we'd use something with more fine-grained locking, I would think, though I'm not at all familiar with the access pattern here, so this might be reasonable. What is the access pattern?
Didn't think about it, tbh. This comes from a Polkadot-SDK repo:
Gotcha. The litmus test would be: lots of contention => DashMap.
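To make the suggestion concrete, here is a minimal sketch of the DashMap shape, assuming the pool only needs keyed access and no ordered iteration (`FilterId` and `FilterEntry` are hypothetical stand-ins for Frontier's real filter pool types):

```rust
use std::sync::Arc;

use dashmap::DashMap;

// Hypothetical stand-ins for the real filter pool key/value types.
type FilterId = u64;

struct FilterEntry {
    last_seen_block: u64,
}

// Arc<DashMap> shards the map and locks per shard, so readers and writers
// touching different keys don't contend, unlike one Mutex around the whole
// BTreeMap.
type ShardedFilterPool = Arc<DashMap<FilterId, FilterEntry>>;

fn touch_filter(pool: &ShardedFilterPool, id: FilterId, block: u64) {
    // Only the shard that owns `id` is locked for this update.
    pool.entry(id)
        .and_modify(|e| e.last_seen_block = block)
        .or_insert(FilterEntry { last_seen_block: block });
}
```

The trade-off is losing ordered iteration, which BTreeMap provides for free; if the retain/prune pass walks filters in key order, the current structure may be the right call after all.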
node/src/ethereum.rs (Outdated)
```rust
pub async fn spawn_frontier_tasks(
    task_manager: &TaskManager,
    client: Arc<FullClient>,
    backend: Arc<FullBackend>,
    frontier_backend: Arc<FrontierBackend<FullClient>>,
    filter_pool: Option<FilterPool>,
    storage_override: Arc<dyn StorageOverride<Block>>,
    fee_history_cache: FeeHistoryCache,
    fee_history_cache_limit: FeeHistoryCacheLimit,
    sync: Arc<SyncingService<Block>>,
    pubsub_notification_sinks: Arc<
        fc_mapping_sync::EthereumBlockNotificationSinks<
            fc_mapping_sync::EthereumBlockNotification<Block>,
        >,
    >,
) {
    // Spawn main mapping sync worker background task.
    match &*frontier_backend {
        fc_db::Backend::KeyValue(b) => {
            task_manager.spawn_essential_handle().spawn(
                "frontier-mapping-sync-worker",
                Some("frontier"),
                fc_mapping_sync::kv::MappingSyncWorker::new(
                    client.import_notification_stream(),
                    Duration::new(6, 0),
                    client.clone(),
                    backend,
                    storage_override.clone(),
                    b.clone(),
                    3,
                    0,
                    fc_mapping_sync::SyncStrategy::Normal,
                    sync,
                    pubsub_notification_sinks,
                )
                .for_each(|()| future::ready(())),
            );
        }
        fc_db::Backend::Sql(b) => {
            task_manager.spawn_essential_handle().spawn_blocking(
                "frontier-mapping-sync-worker",
                Some("frontier"),
                fc_mapping_sync::sql::SyncWorker::run(
                    client.clone(),
                    backend,
                    b.clone(),
                    client.import_notification_stream(),
                    fc_mapping_sync::sql::SyncWorkerConfig {
                        read_notification_timeout: Duration::from_secs(30),
                        check_indexed_blocks_interval: Duration::from_secs(60),
                    },
                    fc_mapping_sync::SyncStrategy::Parachain,
                    sync,
                    pubsub_notification_sinks,
                ),
            );
        }
    }

    // Spawn Frontier EthFilterApi maintenance task.
    if let Some(filter_pool) = filter_pool {
        // Each filter is allowed to stay in the pool for 100 blocks.
        const FILTER_RETAIN_THRESHOLD: u64 = 100;
        task_manager.spawn_essential_handle().spawn(
            "frontier-filter-pool",
            Some("frontier"),
            EthTask::filter_pool_task(client.clone(), filter_pool, FILTER_RETAIN_THRESHOLD),
        );
    }

    // Spawn Frontier FeeHistory cache maintenance task.
    task_manager.spawn_essential_handle().spawn(
        "frontier-fee-history",
        Some("frontier"),
        EthTask::fee_history_task(
            client,
            storage_override,
            fee_history_cache,
            fee_history_cache_limit,
        ),
    );
}
```
This looks good, but we should probably think about adding a mechanism for handling task failures: if any of these tasks fails, it could leave the node in an inconsistent state. Logging and retry mechanisms could be useful here; basically, I'd feel safer if it were more self-healing.
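To sketch what self-healing could mean here: a supervisor future that rebuilds and re-runs a task whenever it exits, with logging and exponential backoff. This is illustrative only; `run_with_restart` is hypothetical (not part of `TaskManager`), and it assumes the `log` and `tokio` crates plus a task body that can be reconstructed on each attempt:

```rust
use std::time::Duration;

// Hypothetical supervisor: re-runs the task if it ever returns, logging the
// exit and backing off exponentially between attempts (capped at 60s).
async fn run_with_restart<F, Fut>(name: &'static str, mut make_task: F)
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = ()>,
{
    let mut backoff = Duration::from_secs(1);
    loop {
        make_task().await;
        // An essential worker returning at all is unexpected; log and retry.
        log::warn!("task `{name}` exited; restarting in {backoff:?}");
        tokio::time::sleep(backoff).await;
        backoff = (backoff * 2).min(Duration::from_secs(60));
    }
}
```

Each spawn site would then pass a closure that rebuilds its worker (e.g. a fresh `MappingSyncWorker` with a new import notification stream) instead of a one-shot future.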
node/src/ethereum.rs (Outdated)
```rust
        graph,
        converter,
        is_authority,
        enable_dev_signer,
        network,
        sync,
        frontier_backend,
        storage_override,
        block_data_cache,
        filter_pool,
        max_past_logs,
        fee_history_cache,
        fee_history_cache_limit,
        execute_gas_limit_multiplier,
        forced_parent_hashes,
        pending_create_inherent_data_providers,
    } = deps;

    let mut signers = Vec::new();
    if enable_dev_signer {
        signers.push(Box::new(EthDevSigner::new()) as Box<dyn EthSigner>);
    }

    io.merge(
        Eth::<B, C, P, CT, BE, A, CIDP, EC>::new(
            client.clone(),
            pool.clone(),
            graph.clone(),
            converter,
            sync.clone(),
            signers,
            storage_override.clone(),
            frontier_backend.clone(),
            is_authority,
            block_data_cache.clone(),
            fee_history_cache,
            fee_history_cache_limit,
            execute_gas_limit_multiplier,
            forced_parent_hashes,
            pending_create_inherent_data_providers,
            Some(Box::new(AuraConsensusDataProvider::new(client.clone()))),
        )
        .replace_config::<EC>()
        .into_rpc(),
    )?;

    if let Some(filter_pool) = filter_pool {
        io.merge(
            EthFilter::new(
                client.clone(),
                frontier_backend.clone(),
                graph.clone(),
                filter_pool,
                500_usize, // max stored filters
                max_past_logs,
                block_data_cache.clone(),
            )
            .into_rpc(),
        )?;
    }

    io.merge(
        EthPubSub::new(
            pool,
            client.clone(),
            sync,
            subscription_task_executor,
            storage_override.clone(),
            pubsub_notification_sinks,
        )
        .into_rpc(),
    )?;

    io.merge(
        Net::new(
            client.clone(),
            network,
            // Whether to format the `peer_count` response as Hex (default) or not.
            true,
        )
        .into_rpc(),
    )?;

    io.merge(Web3::new(client.clone()).into_rpc())?;

    io.merge(
        Debug::new(
            client.clone(),
            frontier_backend,
            storage_override,
            block_data_cache,
        )
        .into_rpc(),
    )?;

    #[cfg(feature = "txpool")]
    io.merge(TxPool::new(client, graph).into_rpc())?;

    Ok(io)
}
```
I recommend breaking this up into helper functions, especially if we think this will grow to be even longer
Done, please review @sam0x17
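For reviewers following along, the extracted shape would look roughly like this toy sketch (types and helper names are illustrative stand-ins, not the actual Frontier/jsonrpsee types): one small fallible helper per RPC namespace, merged in sequence by a short `create_full`:

```rust
// Toy stand-ins for the RPC module and its dependency struct.
struct RpcModule(Vec<&'static str>);

struct Deps {
    enable_filter: bool,
}

impl RpcModule {
    fn merge(&mut self, namespace: &'static str) -> Result<(), String> {
        self.0.push(namespace);
        Ok(())
    }
}

// One helper per namespace keeps each concern small and testable.
fn merge_eth(io: &mut RpcModule, _deps: &Deps) -> Result<(), String> {
    io.merge("eth")
}

fn merge_eth_filter(io: &mut RpcModule, deps: &Deps) -> Result<(), String> {
    if deps.enable_filter {
        io.merge("eth_filter")?;
    }
    Ok(())
}

fn create_full(deps: Deps) -> Result<RpcModule, String> {
    let mut io = RpcModule(Vec::new());
    merge_eth(&mut io, &deps)?;
    merge_eth_filter(&mut io, &deps)?;
    // net, web3, debug, and txpool helpers would follow the same pattern.
    Ok(io)
}
```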
runtime/src/precompiles/mod.rs (Outdated)
```rust
pub fn bytes_to_account_id(account_id_bytes: &[u8]) -> Result<AccountId32, PrecompileFailure> {
    AccountId32::from_slice(account_id_bytes).map_err(|_| {
        log::info!("Error parsing account id bytes {:?}", account_id_bytes);
        PrecompileFailure::Error {
            exit_status: ExitError::InvalidRange,
        }
    })
}
```
This should be a TryFrom
Done, please review @sam0x17
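For reference, one way the `TryFrom` shape can work despite the orphan rule (which forbids `impl TryFrom<&[u8]> for AccountId32` directly, since both types are foreign) is a local newtype. A minimal sketch, assuming the usual sp-core and pallet-evm re-exports (`EvmAccountId` is a hypothetical name):

```rust
use sp_core::crypto::{AccountId32, ByteArray};
// Assumed re-export paths; adjust to wherever the runtime pulls these from.
use pallet_evm::{ExitError, PrecompileFailure};

/// Hypothetical local wrapper: needed because the orphan rule forbids
/// implementing TryFrom<&[u8]> for the foreign AccountId32 directly.
pub struct EvmAccountId(pub AccountId32);

impl TryFrom<&[u8]> for EvmAccountId {
    type Error = PrecompileFailure;

    fn try_from(bytes: &[u8]) -> Result<Self, Self::Error> {
        AccountId32::from_slice(bytes)
            .map(EvmAccountId)
            .map_err(|_| PrecompileFailure::Error {
                exit_status: ExitError::InvalidRange,
            })
    }
}
```

Call sites then become `let account = EvmAccountId::try_from(bytes)?.0;`, keeping the error mapping in one place.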
Mz/feat/staking precompile
```rust
parameter_types! {
    pub const SubtensorChainId: u64 = 0x03B1; // Unicode for lowercase alpha
    // pub const SubtensorChainId: u64 = 0x03C4; // Unicode for lowercase tau
}
```
@gregzaitsev is this ok?
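For reference, a quick sanity check of the constant (pure illustration, runnable standalone):

```rust
fn main() {
    // U+03B1 is GREEK SMALL LETTER ALPHA, so this chain ID is decimal 945.
    assert_eq!(0x03B1u64, 945);
    assert_eq!(char::from_u32(0x03B1), Some('α'));
    // The commented-out alternative, lowercase tau, would be 0x03C4 = 964.
    assert_eq!(char::from_u32(0x03C4), Some('τ'));
}
```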
Description
Add EVM functionality to the chain
Related Issue(s)
n/a
Type of Change
Breaking Change
n/a
Checklist
- I have run `cargo fmt` and `cargo clippy` to ensure my code is formatted and linted correctly

Screenshots (if applicable)
n/a
Additional Notes
Testing can be done with instructions here:
https://github.com/gztensor/evm-demo
fixes #862