
feat: basic global contract support #12155

Open
wants to merge 1 commit into master

Conversation

bowenwang1996 (Collaborator) commented:

A first step towards near/NEPs#556. This PR implements the following:

  • Global contract shard that stores an account id -> global contract mapping. It uses the existing MPT state implementation, with the state root included in block headers.
  • DeployPermanentContractAction that allows an account to deploy a global contract. The contract is immediately compiled upon deployment and only gets deployed if it compiles successfully (a rough sketch follows this list).
  • The logic for propagating the deployment information from the runtime to the chain to update the global contract root, and back from the chain to the runtime so that new transactions are charged properly.
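For illustration, a minimal sketch of the new action and the mapping described above (the `code` field matches the diff hunks quoted later in this review, but treat the shape as illustrative rather than exact):

```rust
/// Rough sketch of the new action: deploys a contract into the global contract shard.
/// The code is compiled while processing the action, and the deployment fails if
/// compilation fails.
pub struct DeployPermanentContractAction {
    /// Raw wasm code of the contract.
    pub code: Vec<u8>,
}

// Conceptually, the global contract shard is an MPT mapping
//   deployer account id -> hash of the deployed code,
// with its state root committed in block headers.
```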

A few important things that are not implemented in this PR:

  • DeployExistingContractAction which references a global contract
  • The logic of actually propagating deployed global contract code across the network to each node.

This PR is light on testing because DeployExistingContractAction is not yet implemented and, as a result, it is hard to test at the interface level. Most of the testing focuses on whether the global contract root is properly updated.

@wacban (Contributor) left a comment:

That is a big PR ;)
Looks good, but I only did a high-level review.

Random thoughts:

  • GC and cold storage support are not really needed, as the contracts are there forever.
  • On the distribution side we'll need to cover two cases:
    • continuous distribution of the changes
    • something similar to state sync, for a fresh node to get the entire cache in one go
  • It would be nice to make the global shard a bit more generic than just contracts; there is a good chance we may want to reuse it later.
  • nit: I prefer global over permanent; it's shorter and it's the most common name for this feature.

Comment on lines +410 to +411
let trie_key = TrieKey::Account { account_id: account_id.clone() };
trie_update.set(trie_key, contract_hash.into());
Contributor:

This feels a bit hacky. The account key in the regular trie stores values in the Account format. Here you use a different format while reusing the same trie key.

Also, I don't quite see why you need the account id in the contract shard at all. I thought the whole point is to allow any account to use the contract. In that case it would make sense to use the contract hash as the key.

I don't think the type byte is needed at all right now, but it may be a good idea to introduce it in case we want to add other types of keys in the future. Still, I think the trie_keys for regular shards and the global contract shard should be separate.
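For illustration, a sketch of what a dedicated key space keyed by code hash could look like; the enum and method names here are hypothetical, not the PR's code:

```rust
use borsh::BorshSerialize;
use near_primitives::hash::CryptoHash;

/// Hypothetical dedicated key space for the global contract shard, keyed by code hash
/// so that any account can reference the contract without a deployer account id.
#[derive(BorshSerialize)]
pub enum GlobalContractTrieKey {
    ContractCode { code_hash: CryptoHash },
}

impl GlobalContractTrieKey {
    /// Serialized form used as the raw trie key; borsh's leading variant byte doubles
    /// as a type byte, leaving room for other key kinds later.
    pub fn to_vec(&self) -> Vec<u8> {
        borsh::to_vec(self).expect("borsh serialization should not fail")
    }
}
```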

Contributor:

+1 for not using the existing key with a different type of value.

Could we use a new TrieKey for this, or TrieKey::ContractCode { account_id }? (Though for the latter the value is the code itself, not its hash.)

fn update_global_shard(
&mut self,
block: &Block,
protocol_version: ProtocolVersion,
Contributor:

nit: I think it would be a bit cleaner to move the protocol version check outside of this method. Currently, just by looking at the method signature, it's not clear whether None is a valid return value that means "no changes" or whether it's only returned when this feature is disabled.
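For illustration, the shape this nit asks for might look like the following at the call site (not standalone code; ProtocolFeature::GlobalContracts is a hypothetical variant name):

```rust
// With the feature gate at the call site, a `None` from update_global_shard can only
// mean "no changes in this block".
let new_global_root = if protocol_version >= ProtocolFeature::GlobalContracts.protocol_version() {
    self.update_global_shard(block)?
} else {
    None
};
```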

Contributor:

+1

Comment on lines +1216 to +1218
// first read from DB to get the slice. Then try to parse it as `BlockExtra`.
// If that parsing fails, try again to parse it as `VersionedBlockExtra`.
// This avoids the migrate the column `DBCol::BlockExtra`.
Contributor:

Why not a migration?

Is it certain that a VersionedBlockExtra cannot be incorrectly parsed as a BlockExtra? Is that an invariant we can ensure holds in future versions of VersionedBlockExtra?
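For illustration, the fallback pattern described in the quoted comment, with simplified stand-in types (the real BlockExtra/VersionedBlockExtra have more fields and live in near-primitives):

```rust
use borsh::{BorshDeserialize, BorshSerialize};

#[derive(BorshSerialize, BorshDeserialize)]
struct BlockExtra {
    challenges_result: Vec<u8>, // stand-in for ChallengesResult
}

#[derive(BorshSerialize, BorshDeserialize)]
struct BlockExtraV1 {
    global_shard_state_root: [u8; 32], // stand-in for CryptoHash
}

#[derive(BorshSerialize, BorshDeserialize)]
enum VersionedBlockExtra {
    V0(BlockExtra),
    V1(BlockExtraV1),
}

/// Reads the raw DB slice, tries the legacy format first and falls back to the
/// versioned enum, so DBCol::BlockExtra does not need a migration. The question above
/// is whether both decodings could ever succeed for the same byte string.
fn parse_block_extra(bytes: &[u8]) -> std::io::Result<VersionedBlockExtra> {
    match BlockExtra::try_from_slice(bytes) {
        Ok(legacy) => Ok(VersionedBlockExtra::V0(legacy)),
        Err(_) => VersionedBlockExtra::try_from_slice(bytes),
    }
}
```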

Contributor:

Another thought: Currently BlockExtra only contains ChallengesResult. Is that ever used? If not, could we simply clean out existing data and start writing VersionedBlockExtra only?

Comment on lines +127 to +128
/// New permanent contracts metadata.
pub permanent_contracts_metadata: Vec<(AccountId, CryptoHash)>,
Contributor:

nit: It would be nice to add "changes" somewhere in the name, though it's quite long already.

@@ -926,6 +930,7 @@ impl Client {
outgoing_receipts_root,
tx_root,
congestion_info,
chunk_extra.permanent_contracts_metadata().to_vec(),
Contributor:

nit: Why is to_vec() needed? It is already a vector.

@@ -674,6 +676,38 @@ pub(crate) fn action_deploy_contract(
Ok(())
}

/// Compiles the permanent contract and store the compiled module in the cache.
Contributor:

Is it a common practice to compile contracts on deployment? Is it meant to filter out invalid contracts?

Comment on lines +163 to +164
fees.fee(ActionCosts::deploy_contract_base).send_fee(sender_is_receiver)
+ fees.fee(ActionCosts::deploy_contract_byte).send_fee(sender_is_receiver)
Contributor:

I think you should be using the costs for the global contracts here.

Comment on lines +261 to +262
fees.fee(ActionCosts::deploy_contract_base).exec_fee()
+ fees.fee(ActionCosts::deploy_contract_byte).exec_fee() * num_bytes
Contributor:

ditto

@@ -0,0 +1 @@
storage_contract_code_burnt_amount_per_byte: { old: 0, new: 100_000_000_000_000_000_000}
Contributor:

Since the cost is quite high, does it implicitly impose a smaller limit on the contract size?
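For a rough sense of scale (assuming the parameter is denominated in yoctoNEAR, like other balance parameters):

```rust
// Back-of-the-envelope burn cost implied by the new per-byte parameter.
fn main() {
    let per_byte: u128 = 100_000_000_000_000_000_000; // 1e20 yoctoNEAR per byte
    let one_near: u128 = 1_000_000_000_000_000_000_000_000; // 1e24 yoctoNEAR
    let contract_size: u128 = 1_000_000; // a 1 MB contract

    let burnt = per_byte * contract_size; // 1e26 yoctoNEAR
    println!("burn for a 1 MB contract: {} NEAR", burnt / one_near); // 100 NEAR
}
```

So deploying a 1 MB contract would burn on the order of 100 NEAR, which acts as an economic deterrent on size rather than a hard limit.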

@@ -150,6 +157,13 @@ pub fn total_send_fees(
&delegate_action.receiver_id,
)?
}
#[cfg(feature = "protocol_feature_global_contracts")]
DeployPermanentContract(DeployPermanentContractAction { code }) => {
Contributor:

What about the other new action (deploy existing contract)?

@tayfunelmas (Contributor) left a comment:

Some comments. In general this looks good. One thing that needs discussion is the part about deploying from an existing contract using a contract hash, and the ContractIdentifier part.

@@ -208,6 +214,8 @@ impl<'a> ChainUpdate<'a> {
balance_burnt,
// TODO(congestion_control) - integration with resharding
None,
// TODO: integrate with resharding
Contributor:

Can we add an issue ID to these TODOs so we can locate/filter them later?
e.g. // TODO(#1234): ...

&mut self,
block: &Block,
protocol_version: ProtocolVersion,
) -> Result<Option<CryptoHash>, Error> {
Contributor:

nit: Use the StateRoot type instead of CryptoHash (same type, but cleaner).

fn update_global_shard(
&mut self,
block: &Block,
protocol_version: ProtocolVersion,
Contributor:

+1

Comment on lines +410 to +411
let trie_key = TrieKey::Account { account_id: account_id.clone() };
trie_update.set(trie_key, contract_hash.into());
Contributor:

+1 for not using the existing key with a different type of value.

Could we use a new TrieKey for this, or TrieKey::ContractCode { account_id }? (Though for the latter the value is the code itself, not its hash.)

let protocol_version =
self.epoch_manager.get_epoch_protocol_version(block.header().epoch_id())?;

if let Some(state_root) = self.update_global_shard(block, protocol_version)? {
Contributor:

I would prefer this method to return a (StateRoot, WrappedTrieChanges) pair and have self.chain_store_update.save_trie_changes called here (together with save_block_extra), to match the rest of the code here and make it clear what is saved in the chain store.
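For illustration, the call site being suggested might look roughly like this (not standalone code; the exact arguments to save_block_extra are elided):

```rust
// update_global_shard only computes the changes; the caller persists them, matching
// how the surrounding code handles per-chunk trie changes.
if let Some((state_root, trie_changes)) = self.update_global_shard(block, protocol_version)? {
    self.chain_store_update.save_trie_changes(trie_changes);
    // ...and persist the new global shard root alongside the block via save_block_extra.
}
```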

/// If account id is specified, then it means that the contract deployed by a specific account will always be used
/// In this case the contract might change since the deployer account can decide to update the contract.
/// If contract hash is used, then it means that the contract specified by this hash will be used. In this case the
/// contract deployed cannot be changed by an external account.
Contributor:

In the latter case (using a contract hash), I would expect the deployment to turn out to be a local deployment of that code (i.e. like issuing a DeployContractAction with the given code, not changing the global shard but updating the shard of that account). Then the only contract identifier would be AccountId. In that case, an account A can deploy a permanent contract, and another account B can deploy that existing contract from the global shard. Whenever A changes the contract, B will start using the new code. If B does not want its contract to change, it deploys that code to its own shard like any other DeployContractAction. In this model, all contracts deployed in the global shard are shared: if the deployer account updates one, everyone gets the update.
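For reference, the identifier under discussion has roughly this shape (illustrative only, not the PR's exact definition), with the two variants carrying the different update semantics described in the quoted doc comment:

```rust
use near_primitives::hash::CryptoHash;
use near_primitives::types::AccountId;

pub enum ContractIdentifier {
    /// Follow whatever code the deployer account currently has in the global shard;
    /// the code can change when the deployer redeploys.
    AccountId(AccountId),
    /// Pin to an exact code hash; the referenced code cannot change underneath the user.
    CodeHash(CryptoHash),
}
```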

tx_root,
prev_validator_proposals,
congestion_info,
new_permanent_contracts: permanent_contracts_metadata,
Contributor:

Maybe we could make it shorter, permanent_contracts, but use a specific type for the (AccountId, hash) pairs, like PermanentContractsMetadata. See my other comment about representing the list of these pairs with a specific struct type.
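For illustration, the suggested wrapper type could look something like this (names are hypothetical):

```rust
use borsh::{BorshDeserialize, BorshSerialize};
use near_primitives::hash::CryptoHash;
use near_primitives::types::AccountId;

/// Hypothetical named type for the (AccountId, CryptoHash) pairs carried in chunk headers.
#[derive(Debug, Clone, PartialEq, Eq, BorshSerialize, BorshDeserialize)]
pub struct PermanentContractMetadata {
    pub deployer: AccountId,
    pub code_hash: CryptoHash,
}
```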

#[derive(Debug, PartialEq, BorshSerialize, BorshDeserialize, Clone, Eq, ProtocolSchema)]
pub struct BlockExtra {
pub challenges_result: ChallengesResult,
}

#[derive(Debug, PartialEq, BorshSerialize, BorshDeserialize, Clone, Eq, ProtocolSchema)]
pub struct BlockExtraV1 {
pub global_shard_state_root: CryptoHash,
Contributor:

See my other comment about ChallengesResult: if it was not used before, maybe we can clean up the existing BlockExtra and only use the new type (I expect it will not impact the protocol or the hashing of other types).

}
}

pub enum GlobalTrieKey {
Contributor:

Why not extend the existing TrieKey enum for this (with new variant names)?

@@ -586,6 +586,8 @@ pub enum ActionErrorKind {
DelegateActionNonceTooLarge { delegate_nonce: Nonce, upper_bound: Nonce },
/// Non-refundable storage transfer to an existing account is not allowed according to NEP-491.
NonRefundableTransferToExistingAccount { account_id: AccountId },
/// An error occurs during the deployment of a contract. Currently only used for permanent contract deployment errors due to legacy reasons.
CompilationError { account_id: AccountId, err_msg: String },
Contributor:

also add the hash of the code

@@ -439,12 +460,23 @@ impl ShardChunkHeader {
}
}

#[inline]
pub fn permanent_contracts_metadata(&self) -> &[(AccountId, CryptoHash)] {
Contributor:

To be consistent in naming, can we rename permanent_contracts_metadata to global_contracts_metadata?

tx_root,
prev_validator_proposals,
congestion_info,
new_permanent_contracts: permanent_contracts_metadata,
Contributor:

Actually, can we name it global_contracts_metadata? (permanent → global)
