[WIP] full-chain membership proof++ integration #9436
Draft: j-berman wants to merge 127 commits into monero-project:master from j-berman:fcmp++
Identified by kayabaNerve, patch suggested by j-berman.
- When retrieving last chunks, set next_start_child_chunk_index so we can know the correct start index without needing to modify the offset
- Other smaller cleanup
- trim_tree now re-adds trimmed outputs back to the locked outputs table. remove_output then deletes from the locked output table.
- Since outputs added to the tree in a specific block may have originated from distinct younger blocks (thanks to distinct unlock times), we need to store the 8 byte output_id in the leaves table as well, so that in the event of a reorg, upon removing outputs from the tree we can add them back to the locked outputs table in the correct order.
- Save 8 bytes per leaf by using DUPFIXED table and dummy "zerokval" key and attaching leaf_idx as prefix to data to serve as DUPSORT key
- fixes usage of MDB_NEXT and MDB_NEXT_DUP, allowing the db call to set key and value
- speeds up trim_tree test by 60%+
- If the output is invalid/unspendable, upon unlock it will be deleted from the locked outputs table and then won't be used to grow the tree. Upon reorg/pop blocks, the invalid output won't be re-added to the locked outputs table upon trimming the tree. Thus, it's possible for an invalid/unspendable output to not be present in the locked outputs table upon remove.
- If the locked output migration step completes, then the program exits while the migration step to grow the tree is in progress, make sure the migration picks back up where it left off growing the tree.
- Make sure db cursor gets set in all cases when renaming block info table.
- Removing the sign bit from key images enables an optimization for fcmp's.
- If an fcmp includes a key image with sign bit cleared, while the same key image with sign bit set exists in the chain already via a ring signature, then the fcmp would be a double spend attempt and the daemon must be able to detect and reject it.
- In order for the daemon to detect such double spends, upon booting the daemon, we clear the sign bit from all key images already in the db. We also make sure that all key images held in memory by the pool have sign bit cleared as well.
- Key images with sign bit cleared are a new type: `crypto::key_image_y`. The sign bit can be cleared via `crypto::key_image_to_y`.
- The `_y` denotes that the encoded point is now the point's y coordinate.
- In order to maintain backwards compatibility with current RPC consumers, the daemon keeps track of which key images have sign bit cleared and not, so that upon serving `spent_key_image_info::id_hash`, the daemon can re-construct the original key image and serve it to clients.
- plus slightly cleaner hash
Speeds up inverting many elems at once by 95%+
Naming suggestion from @kayabaNerve
- Moved functions around in unit_tests/curve_trees.{h,cpp} to ease using the in-memory Global tree across tests
- Introduced PathV1 struct, which is a path in the tree containing whole chunks at each layer
- Implemented functions to get_path_at_leaf_idx and get_tree_root on in-memory Global tree
- Cleanly separate logic to set the hash_offset that we use when calling hash_trim and hash_grow from the logic used to determine which old child values we need from the tree
- The core logic error was not properly setting the range of children needed from the tree when need_last_chunk_remaining_children is true. The fix makes sure to use the correct range, and to set hash_offset appropriately for every case.
- In the case that get_next_layer_reduction doesn't actually need to do any hashing, only tell the caller to trim to boundary, the function now short-circuits and doesn't continue with hashing
- batch_start is the simplest function to use to resize db, since resizing requires no active txns.
- batch_stop makes sure no active txns.
- need to decrement txns before calling migrate() so that do_resize does not deadlock in wait_no_active_txns
This is a WIP draft PR for the full-chain membership proof (fcmp++) integration. It's roughly following section 6 of the specification written by @kayabaNerve (paper, commit).
Checklist of items expected in this PR:
- grow_tree algorithm
- trim_tree algorithm

The above checklist does not include all items required to complete the integration.
I plan to divide the code into commits where each subsequent commit builds off the prior commit. I could eventually close this PR in favor of smaller PRs that can be reviewed sequentially and in isolation.
This PR description can function as living documentation for the code as work on the integration progresses (and audits/fcmp++ research progress in parallel). In this description, I highlight the most critical components from the code, aiming to make the PR as a whole easier to understand. Thoughts/feedback are welcome at any time.
A. Rust FFI
Since much of the full-chain membership proof++ code is written in Rust, this PR implements a Foreign Function Interface (FFI) to call the Rust code from C++. Using cmake, the Rust code is compiled into a static lib (`libfcmp_pp_rust.a`) when you run `make` from the root of the monero repo. The static lib's functions are exposed via the C++ `src/fcmp_pp/fcmp++.h` header file (generated with the help of cbindgen and modified slightly). The heavy lifting on the Rust side is done in @kayabaNerve's `full-chain-membership-proofs` Rust crate; the Rust handles the math on the Helios and Selene curves, and fcmp++ construction and verification.

Here is what the structure looks like at time of writing:
B. Curve trees merkle tree
The curve trees merkle tree is a new store for spendable transaction outputs in the chain. fcmp++'s work by proving you own (and can spend) an output in the tree, without revealing which output is yours. All existing valid cryptonote outputs will be inserted into the tree as soon as the outputs unlock. Once an output is in the tree, users can construct fcmp++'s with that output. Thus, the anon set will roughly be the entire chain since genesis.
The leaves in the tree are composed of output tuples `{O.x, I.x, C.x}`, and each layer after the leaf layer is composed of hashes of chunks of the preceding layer, as follows:

Each layer is composed of points alternating on two curves (@tevador's proposed Selene and Helios curves):
- The leaves are Selene scalars (we convert ed25519 points to Selene scalars).
- The layer after leaves is composed of points on the Selene curve (we hash chunks of Selene scalars from the leaf layer to get this layer's Selene points).
- The following layer is composed of points on the Helios curve (we convert the prior layer's Selene points to Helios scalars, and hash chunks of those Helios scalars to get this layer's Helios points).
- The following layer is composed of points on the Selene curve (we convert the prior layer's Helios points to Selene scalars, and hash chunks of those Selene scalars to get this layer's Selene points).
- And so on. We continue until there is just one chunk in a layer to hash, leaving us with the tree root.
Each curve has a defined chunk width used when hashing the children in the preceding layer. The final layer has a single element in it: the root.
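As a rough illustration of the alternating-curve structure described above, here is a minimal sketch (with made-up chunk widths; the real widths come from the Helios/Selene hash parameters) that counts how many layers a tree needs for a given number of leaf tuples:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical chunk widths, for illustration only; the real widths are set by
// the Selene/Helios hash parameters in the fcmp++ code.
constexpr std::uint64_t SELENE_CHUNK_WIDTH = 18;
constexpr std::uint64_t HELIOS_CHUNK_WIDTH = 38;

// Count the layers above the leaf layer: layer 0 hashes chunks of leaf tuples
// into Selene points, layer 1 hashes those into Helios points, and the curves
// keep alternating until a single element (the root) remains. For simplicity
// this counts whole leaf tuples per chunk rather than individual scalars.
std::uint64_t n_layers(std::uint64_t n_leaf_tuples)
{
    std::uint64_t n_children = n_leaf_tuples;
    std::uint64_t layers = 0;
    while (n_children > 1)
    {
        const bool parent_is_selene = (layers % 2) == 0; // even layer_idx => Selene
        const std::uint64_t width = parent_is_selene ? SELENE_CHUNK_WIDTH : HELIOS_CHUNK_WIDTH;
        n_children = (n_children + width - 1) / width;   // ceil: number of chunks/parents
        ++layers;
    }
    return layers;
}

int main()
{
    // e.g. ~100 million leaf tuples still yields a shallow tree thanks to the wide chunks
    std::printf("layers: %llu\n", static_cast<unsigned long long>(n_layers(100000000)));
    return 0;
}
```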
There are 3 critical steps to growing the tree:
a. Curve trees merkle tree: Preparing locked outputs for insertion to the tree upon unlock
We first need to determine the block in which outputs unlock. We keep track of locked outputs by unlock block in the database so we can grow the tree in the block they unlock.
Take note of the function: `get_outs_by_unlock_block`. Upon adding a block, we iterate over all the block's tx outputs in order, and place the outputs in the container `OutputsByUnlockBlock = std::unordered_map<uint64_t, std::vector<OutputContext>>`. The `uint64_t` is the output's unlock height. The output's unlock height is calculated using the new `get_unlock_block_index` function. `get_unlock_block_index` is documented further below. The `std::vector<OutputContext>` for each unlock height should be sorted in the order outputs appear in the chain.

Upon adding a block, we'll add those outputs to the database here:
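To illustrate the container logic described above (this is not the actual db insertion code, and `OutputContext` is a simplified, hypothetical stand-in here), a minimal sketch:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Simplified, hypothetical stand-in for the real OutputContext: just enough to
// show the grouping idea (output_id increases in chain order).
struct OutputContext
{
    std::uint64_t output_id;
    // ... output pub key, commitment, etc.
};

using OutputsByUnlockBlock = std::unordered_map<std::uint64_t, std::vector<OutputContext>>;

// Bucket an output by the block index in which it unlocks. Because a block's
// tx outputs are visited in order and output_id grows monotonically, each
// bucket stays sorted in the order outputs appear in the chain.
void add_locked_output(OutputsByUnlockBlock &outs_by_unlock_block,
    const std::uint64_t unlock_block_index, const OutputContext &output)
{
    outs_by_unlock_block[unlock_block_index].push_back(output);
}
```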
LMDB table changes are documented further below in section B.d.
get_unlock_block_index
The idea behind this function is to have a deterministic and efficient method of growing the tree when outputs unlock.
Most outputs in the chain don't include an `unlock_time`; those outputs unlock 10 blocks after they are included in the chain.

Some outputs include an `unlock_time` which should either be interpreted as the height at which an output should unlock, or the time at which an output should unlock. When the `unlock_time` should be interpreted as height, the response to `get_unlock_block_index` is trivial. When interpreted as time, the logic is less straightforward. In this PR, as proposed by @kayabaNerve, I use the prior hard fork's block and time as an anchor point, and determine the unlock block from that anchor point. By converting timestamped `unlock_time` to a deterministic unlock block, we avoid needing to search for outputs that unlock by timestamp.
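To make the timestamp case concrete, here is a minimal sketch of the anchor-point idea; the constants and function name are illustrative and this is not the actual `get_unlock_block_index` implementation:

```cpp
#include <cstdint>

// Monero's target block time.
constexpr std::uint64_t SECONDS_PER_BLOCK = 120;

// Illustrative only: derive a deterministic unlock block index for a
// timestamp-based unlock_time, anchored at the prior hard fork's block
// height and timestamp.
std::uint64_t unlock_block_from_timestamp_sketch(
    const std::uint64_t unlock_timestamp,
    const std::uint64_t hf_anchor_height,
    const std::uint64_t hf_anchor_timestamp)
{
    if (unlock_timestamp <= hf_anchor_timestamp)
        return hf_anchor_height;
    const std::uint64_t seconds_after_anchor = unlock_timestamp - hf_anchor_timestamp;
    // Round up so the output never unlocks earlier than the timestamp implies.
    const std::uint64_t blocks_after_anchor =
        (seconds_after_anchor + SECONDS_PER_BLOCK - 1) / SECONDS_PER_BLOCK;
    return hf_anchor_height + blocks_after_anchor;
}
```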
Note it is possible (likely) for the returned `unlock_block_index` to be distinct from current consensus' enforced unlock block for timestamp-based locked outputs only. The proposal is for consensus to enforce this new rule for fcmp++'s (users won't be able to construct fcmp's until outputs unlock according to the rules of `get_unlock_block_index`).

Note: `get_unlock_block_index` from `unlock_time` is not in production form as is. The calculation should account for:

b. Curve trees merkle tree: grow_tree
This function takes a set of new outputs and uses them to grow the tree.
It has 3 core steps:
Steps 1 and 3 are fairly straightforward. Step 2 carries the most weight and is the most complex. It's implemented in the `CurveTrees` class `get_tree_extension` function documented further below.

This step-wise approach enables clean separation of the db logic (steps 1 and 3) from the grow logic (step 2). In my view, this separation enables cleaner, more efficient code, and stronger testing. It also enables reusable tree building code for wallet scanning.
get_tree_extension

`get_tree_extension` has 2 core steps:

1. Prepare new leaves for insertion into the tree.

a. Sort new outputs by the order they appear in the chain (guarantees consistent insertion order in the tree).

b. Convert valid outputs to leaf tuples (from the form `{output_pubkey,commitment}` to `{O,I,C}` to `{O.x,I.x,C.x}`).
- Outputs with an `output_pubkey` or `commitment` that are not on the ed25519 curve, or are equal to identity after clearing torsion, are not valid and are not used to grow the tree.
- See the `CurveTrees<Helios, Selene>::LeafTuple CurveTrees<Helios, Selene>::leaf_tuple` function for the code.

c. Place all leaf tuple members in a flat vector (`[{output 0 output pubkey and commitment}, {output 1 output pubkey and commitment},...]` becomes `[O.x,I.x,C.x,O.x,I.x,C.x,...]`).

2. Go layer by layer, hashing chunks of the preceding layer, and place results in the `TreeExtension` struct.

a. Get `GrowLayerInstructions` for the current layer (see the sketch below).
- The logic to get `GrowLayerInstructions` for the layer after the leaf layer is distinct from all other layers after it.
- Using the `old_total_children`, `new_total_children`, `parent_chunk_width`, and a bool for whether or not the `last_child_will_change`, we can determine how exactly we expect a layer to grow.

b. Get the `LayerExtension` for the current layer to add to the `TreeExtension` struct.
- We use the `GrowLayerInstructions` to determine correct values when hashing the preceding "child" layer.
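A rough sketch of the kind of bookkeeping `GrowLayerInstructions` captures for a single layer (field names and logic here are simplified illustrations, not the actual implementation):

```cpp
#include <cstdint>

// Illustrative sketch of per-layer grow bookkeeping; not the actual struct.
struct grow_layer_instructions_sketch
{
    std::uint64_t old_total_parents;
    std::uint64_t new_total_parents;
    bool need_old_last_parent; // the existing last parent hash must be re-computed
};

grow_layer_instructions_sketch get_grow_instructions_sketch(
    const std::uint64_t old_total_children,
    const std::uint64_t new_total_children,
    const std::uint64_t parent_chunk_width,
    const bool last_child_will_change)
{
    grow_layer_instructions_sketch gi;

    // ceil(children / chunk width) parents in the layer above
    gi.old_total_parents = (old_total_children + parent_chunk_width - 1) / parent_chunk_width;
    gi.new_total_parents = (new_total_children + parent_chunk_width - 1) / parent_chunk_width;

    // The existing last parent must be re-hashed if its chunk wasn't full (new
    // children get appended into it) or if the value of its last child changes.
    const bool last_chunk_was_full = (old_total_children % parent_chunk_width) == 0;
    gi.need_old_last_parent =
        old_total_children > 0 && (!last_chunk_was_full || last_child_will_change);

    return gi;
}
```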
c. Curve trees merkle tree: trim_tree
This function trims the provided number of leaf tuples from the tree.
The function has 5 core steps:

1. Get the number of leaf tuples in the tree from the db.
2. Get `TrimLayerInstructions`, which we can use to know how to trim each layer in the tree.
3. Get the existing children and hashes we need from what will become the new last chunk in each layer.
4. Get the `TreeReduction` struct, which we can use to trim the tree.
5. Use the `TreeReduction` struct to trim the tree.

Step 1 is straightforward.
Step 2 carries the most weight and is the most complex. It's implemented in the `CurveTrees` class `get_trim_instructions` function documented further below.

In step 3, the "new last chunk in each layer" is referring to what will become the new last chunk in a layer after trimming that layer. We need values from those existing chunks in order to correctly and efficiently trim the chunk.

Step 4 is also complex, and is implemented in the `CurveTrees` class `get_tree_reduction` function documented further below.

In step 5, we also make sure to re-add any trimmed outputs back to the locked outputs table. We only trim the tree 1 block at a time. Therefore any trimmed outputs must necessarily be re-locked upon removal from the tree.
Like for `grow_tree`, this step-wise approach enables clean separation of db logic (steps 1, 3, 5) from the trim logic (steps 2 and 4).

get_trim_instructions
This function first gets instructions for trimming the leaf layer, then continues getting instructions for each subsequent layer until reaching the root.
The function doing the heavy lifting is:
Similar to growing a layer, there are edge cases to watch out for when trimming a layer:
This function captures these edge cases and outputs a struct that tells the caller how exactly to handle them.
get_tree_reduction

This function iterates over all layers, outputting a `LayerReduction` struct for each layer, which is a very simple struct we can use to trim a layer in the tree (see the sketch below). It uses each layer's `TrimLayerInstructions` from above as a guide, dictating exactly what data to use to calculate a new last hash for each layer.
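A minimal sketch of what such a per-layer reduction record might carry (illustrative field names; the actual `LayerReduction` struct in the CurveTrees code may differ):

```cpp
#include <array>
#include <cstdint>

// Illustrative sketch of a per-layer reduction record. A layer's hash is a
// Selene or Helios point, represented here as 32 opaque bytes.
struct layer_reduction_sketch
{
    std::uint64_t new_total_parents{0};           // how many hashes remain in this layer after trimming
    bool update_existing_last_hash{false};        // whether an existing hash is replaced by new_last_hash
    std::array<std::uint8_t, 32> new_last_hash{}; // the re-computed last hash for the layer (if any)
};
```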
d. Curve trees merkle tree: LMDB changes

The following changes to the db are necessary in order to store and update the curve trees merkle tree.
NEW: `locked_outputs` table

Potential outputs to be inserted into the merkle tree, indexed by the block ID in which the outputs unlock.

We store the output ID to guarantee outputs are inserted into the tree in the order they appear in the chain.

This table stores the output pub key and commitment (64 bytes) instead of `{O.x,I.x,C.x}`, since `{O.x,I.x,C.x}` (96 bytes) can be derived from the output pub key and commitment, saving 32 bytes per output. Note that we should theoretically be able to stop storing the output public key and commitment in the `output_amounts` table at the hard fork, since that table should only be useful to construct and verify pre-fcmp++ txs.
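A rough sketch of the record layouts implied by the byte-savings note above (illustrative only, not the actual LMDB encodings):

```cpp
#include <cstdint>

// Hypothetical layouts illustrating the 32-byte-per-output saving described
// above; these are not the actual LMDB encodings.
struct stored_output_rec        // what the locked_outputs / leaves tables hold (sketch)
{
    std::uint64_t output_id;    // preserves chain order (and re-lock order on trim)
    std::uint8_t  O[32];        // output pub key
    std::uint8_t  C[32];        // commitment
};

struct derived_leaf_tuple       // derived when building the tree (sketch)
{
    std::uint8_t O_x[32];
    std::uint8_t I_x[32];       // I is derived from the output pub key (hash to point), so it isn't stored
    std::uint8_t C_x[32];
};
```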
NEW: `leaves` table

Leaves in the tree.

We store the output ID so that when we trim the tree, we know where to place the output back into the locked outputs table.

Same as above: this table stores the output pub key and commitment (64 bytes) instead of `{O.x,I.x,C.x}`, since `{O.x,I.x,C.x}` (96 bytes) can be derived from the output pub key and commitment, saving 32 bytes per output.

Note that we must save the output pub key for outputs in the chain before the fork that includes fcmp++, since we need to derive `I` from the pre-torsion cleared points. After the fork, we can store torsion cleared valid `{O,C}` pairs instead if we ban torsioned outputs and commitments at consensus, or if we redefine hash to point to use torsion cleared `O.x` as its input.

Note we also use the dummy zerokval key optimization for this table as explained in this comment:
NEW: `layers` table

Each record is a 32 byte hash of a chunk of children, as well as that hash's position in the tree.

The `layer_idx` is indexed starting at the layer after the leaf layer (i.e. `layer_idx=0` corresponds to the layer after the leaf layer).

Example: `{layer_idx=0, child_chunk_idx=4, child_chunk_hash=<31fa...>}` means that the `child_chunk_hash=<31fa...>` is a hash of the 5th chunk of leaves, and is a Selene point. Another example: `{layer_idx=1, child_chunk_idx=36, child_chunk_hash=<a2b5...>}` means that the `child_chunk_hash=<a2b5...>` is a hash of the 37th chunk of elements from `layer_idx=0`, and is a Helios point.

An even `layer_idx` corresponds to Selene points. An odd `layer_idx` corresponds to Helios points.

The element with the highest `layer_idx` is the root (which should also be the last element in the table). There should only be a single element with the highest `layer_idx` (i.e. only one data item with key == max `layer_idx`).
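To illustrate how a `layers` record is interpreted (hypothetical layout and chunk width, not the actual LMDB encoding):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical record layout for illustration; the actual LMDB encoding differs.
struct layer_record
{
    std::uint64_t layer_idx;        // 0 = the layer directly above the leaves
    std::uint64_t child_chunk_idx;  // which chunk of the child layer this hash covers
    std::uint8_t  child_chunk_hash[32];
};

// Even layers hold Selene points, odd layers hold Helios points.
bool is_selene_layer(const std::uint64_t layer_idx) { return (layer_idx % 2) == 0; }

// With a hypothetical chunk width, the record at child_chunk_idx commits to
// children [child_chunk_idx * width, (child_chunk_idx + 1) * width) of the
// layer below it (the leaf layer when layer_idx == 0).
void print_chunk_range(const layer_record &rec, const std::uint64_t chunk_width)
{
    const std::uint64_t start = rec.child_chunk_idx * chunk_width;
    std::printf("layer %llu (%s point): children [%llu, %llu)\n",
        static_cast<unsigned long long>(rec.layer_idx),
        is_selene_layer(rec.layer_idx) ? "Selene" : "Helios",
        static_cast<unsigned long long>(start),
        static_cast<unsigned long long>(start + chunk_width));
}
```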
UPDATED: `block_info` table

New fields:
- `bi_n_leaf_tuples` - the number of leaf tuples in the tree at that height.
- `bi_tree_root` - the root hash of the tree at that height. It is a (compressed) Helios point or Selene point, which can be determined from the number of leaf tuples in the tree.
e. Curve trees merkle tree: Growing the tree as the node syncs

At each block, the tree must grow with (valid) outputs that unlock in that block. In the `add_block` function in `db_lmdb.cpp`, note the following:

Then when adding the block, we get the number of leaf tuples in the tree and tree root and store them on each block info record:

Finally, we use the container mentioned above to place the locked outputs from that block in a "staging" `locked_outputs` table, ready to be used to grow the tree once they unlock.

Comments
f. Curve trees merkle tree: Migrating cryptonote outputs into the tree
All existing cryptonote outputs need to be migrated into the merkle tree.
The migration first places existing outputs in the `locked_outputs` table; a second migration step then grows the tree from that table.

g. Curve trees merkle tree: Key image migration
Removing the sign bit from key images enables an optimization for fcmp's (refer to the specification paper for further details on the optimization). If an fcmp includes a key image with sign bit cleared, while the same key image with sign bit set exists in the chain already via a ring signature, then the fcmp would be a double spend attempt and the daemon must be able to detect and reject it. In order for the daemon to detect such double spends, upon booting the daemon, we clear the sign bit from all key images already in the db. All key images inserted to the db have their sign bit cleared before insertion, and the db prevents duplicates. We also make sure that all key images held in memory by the pool have sign bit cleared (see `key_images_container`). Transactions must have unique key images with sign bit cleared too (see `check_tx_inputs_keyimages_diff`). Key images with sign bit cleared are a new type: `crypto::key_image_y`. The sign bit can be cleared via `crypto::key_image_to_y`. The `_y` denotes that the encoded point is now the point's y coordinate.
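A minimal sketch of the sign-bit idea (using byte-array stand-ins rather than the actual `crypto` types):

```cpp
#include <array>
#include <cstdint>

// Stand-ins for crypto::key_image / crypto::key_image_y, for illustration only.
using key_image_bytes   = std::array<std::uint8_t, 32>; // compressed ed25519 point
using key_image_y_bytes = std::array<std::uint8_t, 32>; // same bytes with the sign bit cleared

// Sketch of the idea behind crypto::key_image_to_y: a compressed ed25519 point
// stores the y coordinate in the low 255 bits and the sign of x in the top bit
// of the last byte. Clearing that bit leaves just the y coordinate, and the
// saved sign bit lets the daemon re-construct the original key image later.
key_image_y_bytes key_image_to_y_sketch(const key_image_bytes &ki, bool &sign_bit_out)
{
    key_image_y_bytes ki_y = ki;
    sign_bit_out = (ki_y[31] & 0x80) != 0;
    ki_y[31] &= 0x7F; // clear the sign bit
    return ki_y;
}
```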
This PR aims to avoid a breaking change to the `COMMAND_RPC_GET_TRANSACTION_POOL` endpoint, which currently serves key images in the pool via the `spent_key_image_info::id_hash` response field. The PR does this by making sure the pool keeps track of the sign bit for each `crypto::key_image_y` held in the pool. The daemon still prevents duplicate `crypto::key_image_y` from entering the pool (except in the case of reorgs as is currently the case), but upon serving the response to `COMMAND_RPC_GET_TRANSACTION_POOL`, the daemon re-derives the `crypto::key_image` using `crypto::key_image_y` and the sign bit, and serves this original `crypto::key_image` via `spent_key_image_info::id_hash`. Note that it is possible for two distinct `id_hash` of the same `key_image_y` to exist, but the `key_image` has sign bit set for one `id_hash` and sign bit cleared for the other `id_hash` (thus 2 distinct `id_hash`'s). This would be possible if during a grace period that allows both fcmp's and ring signatures, there exists an alternate chain where a user constructs an fcmp spending an output, and an alternate chain where a user constructs a ring signature spending the same output and the key image has sign bit set.

TODO: tests for this grace period scenario.
h. Curve trees merkle tree: Trim the tree on reorg and on pop blocks

- On reorg and on pop blocks, the daemon removes blocks via `BlockchainLMDB::remove_block()`.
- In `BlockchainLMDB::remove_block()`, after removing the block from the block info table, we call `BlockchainLMDB::trim_tree` with the number of leaves to trim and the block id which we're trimming.
- Trimmed outputs are re-added to the locked outputs table, using the `output_id` to re-insert the output into the locked outputs table in the correct order.
- After `BlockchainLMDB::remove_block()`, the daemon removes all of the block's transactions from the db via `BlockchainLMDB::remove_transaction`.
- Part of `BlockchainLMDB::remove_transaction` is `BlockchainLMDB::remove_output`, which is called for all of a tx's outputs.
- In `BlockchainLMDB::remove_output` we remove the output from the locked outputs table if it's present, i.e. if it was still locked or was re-added to the locked outputs table by `BlockchainLMDB::trim_tree`.
C. Transaction struct changes for fcmp++

cryptonote::transaction::rctSig

rctSigBase

Added a new `RCTType` enum usable in the `type` member of `rctSigBase`: `RCTTypeFcmpPlusPlus = 7`

fcmp++ txs are expected to use this `RCTType` instead of `RCTTypeBulletproofPlus` (even though fcmp++ txs are still expected to have a bp+ range proof).

Added a new member to `rctSigBase`: `crypto::hash referenceBlock; // block containing the merkle tree root used for the tx's fcmp++`

This member is only expected present on txs of `rctSigBase.type == RCTTypeFcmpPlusPlus`.
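A condensed sketch of the `rctSigBase` changes described above (only the fcmp++-relevant members are shown, with a stand-in `crypto::hash`):

```cpp
#include <cstdint>

namespace crypto { struct hash { std::uint8_t data[32]; }; } // stand-in for illustration

// Existing ringct types end at RCTTypeBulletproofPlus = 6; fcmp++ adds 7.
enum RCTTypeSketch : std::uint8_t
{
    RCTTypeBulletproofPlus = 6,
    RCTTypeFcmpPlusPlus    = 7, // new: fcmp++ txs (still carry a bp+ range proof)
};

// Condensed sketch of rctSigBase with only the fcmp++-relevant members shown.
struct rctSigBaseSketch
{
    std::uint8_t type;           // holds an RCTType value
    // ... existing members elided ...
    crypto::hash referenceBlock; // block containing the merkle tree root used for the tx's fcmp++
                                 // only expected present when type == RCTTypeFcmpPlusPlus
};
```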
rctSigPrunable

Added 2 new members:
Note there is a single opaque fcmp++ struct per tx. The `FcmpPpProof` type is simply a `std::vector<uint8_t>`. The length of the `FcmpPpProof` is deterministic from the number of inputs in the tx and curve trees merkle tree depth. Thus, when serializing and de-serializing, we don't need to store the vector length, and can expect a deterministic number of bytes for the `FcmpPpProof` by calling `fcmp_pp::proof_len(inputs, curve_trees_tree_depth)`.
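A minimal sketch of why a deterministic length means no stored vector size is needed (the `proof_len_sketch` formula below is a made-up placeholder, not the real `fcmp_pp::proof_len`):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Placeholder for fcmp_pp::proof_len, for illustration only; the real function
// is defined by the fcmp++ proof layout, not by this formula.
std::size_t proof_len_sketch(const std::size_t n_inputs, const std::size_t tree_depth)
{
    return 128 + 32 * n_inputs * tree_depth; // made-up formula
}

// Because the reader can recompute the expected byte count from the tx's input
// count and the tree depth, the proof bytes can be read without a stored length.
bool read_fcmp_pp_sketch(const std::vector<std::uint8_t> &blob, const std::size_t offset,
    const std::size_t n_inputs, const std::size_t tree_depth,
    std::vector<std::uint8_t> &proof_out)
{
    const std::size_t len = proof_len_sketch(n_inputs, tree_depth);
    if (blob.size() < offset || blob.size() - offset < len)
        return false; // not enough bytes for the expected proof size
    proof_out.assign(blob.begin() + offset, blob.begin() + offset + len);
    return true;
}
```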
Comments

The `tx_fcmp_pp` serialization test demonstrates what an expected dummy `transaction` struct looks like with dummy data.

D. Constructing fcmp++ transactions
TODO
E. Verifying fcmp++ transactions
TODO
F. Consensus changes for fcmp++
TODO