
Merge pull request #726 from 0xPolygonMiden/next
Tracking PR for v0.5 release
bobbinth authored Mar 29, 2023
2 parents 159a04f + b7fabf9 commit 4195475
Showing 200 changed files with 3,519 additions and 2,573 deletions.
2 changes: 2 additions & 0 deletions .git-blame-ignore-revs
@@ -0,0 +1,2 @@
# initial run of pre-commit
7e025f9b5d0feccfc2c9b1630f951a4256024906
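To make local `git blame` actually skip the revision listed above, the file has to be wired into git configuration (a usage sketch; the `blame.ignoreRevsFile` setting and `--ignore-revs-file` flag require git 2.23 or later):

```shell
# Tell git blame to skip the formatting-only revisions listed in the file
git config blame.ignoreRevsFile .git-blame-ignore-revs

# Or pass the file explicitly for a one-off blame
git blame --ignore-revs-file .git-blame-ignore-revs README.md
```

GitHub's own blame view picks up `.git-blame-ignore-revs` automatically; the config above is only needed for local checkouts.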
2 changes: 1 addition & 1 deletion .github/pull_request_template.md
@@ -6,4 +6,4 @@
- Commit messages and codestyle follow [conventions](./CONTRIBUTING.md).
- Relevant issues are linked in the PR description.
- Tests added for new functionality.
- Documentation/comments updated according to changes.
- Documentation/comments updated according to changes.
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -37,7 +37,7 @@ jobs:
matrix:
toolchain: [stable, nightly]
os: [ubuntu]
args: [--release, --doc]
args: [--profile test-release, --profile test-release --doc]
steps:
- uses: actions/checkout@main
- name: Install rust
43 changes: 43 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,43 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.2.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-json
- id: check-toml
- id: pretty-format-json
- id: check-added-large-files
- id: check-case-conflict
- id: check-executables-have-shebangs
- id: check-merge-conflict
- id: detect-private-key
- repo: https://github.com/hackaugusto/pre-commit-cargo
rev: v1.0.0
hooks:
# Allows cargo fmt to modify the source code prior to the commit
- id: cargo
name: Cargo fmt
args: ["+stable", "fmt", "--all"]
stages: [commit]
# Requires code to be properly formatted prior to pushing upstream
- id: cargo
name: Cargo fmt --check
args: ["+stable", "fmt", "--all", "--check"]
stages: [push, manual]
- id: cargo
name: Cargo check --all-targets
args: ["+stable", "check", "--all-targets"]
- id: cargo
name: Cargo check --all-targets --no-default-features
args: ["+stable", "check", "--all-targets", "--no-default-features"]
- id: cargo
name: Cargo check --all-targets --all-features
args: ["+stable", "check", "--all-targets", "--all-features"]
# Unlike fmt, clippy will not be automatically applied
- id: cargo
name: Cargo clippy
args: ["+nightly", "clippy", "--workspace", "--", "--deny", "clippy::all", "--deny", "warnings"]
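A usage sketch for the configuration above (assuming `pre-commit` is installed, e.g. via `pip install pre-commit`; the two install commands correspond to the `commit` and `push` stages declared in the hooks):

```shell
# Install the commit-stage hooks defined in .pre-commit-config.yaml
pre-commit install

# Install the push-stage hooks (e.g. the "Cargo fmt --check" hook)
pre-commit install --hook-type pre-push

# Run every hook against all files once, e.g. after first adding the config
pre-commit run --all-files

# Run a single hook by its id
pre-commit run trailing-whitespace --all-files
```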
20 changes: 20 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,25 @@
# Changelog

## 0.5.0 (2023-03-29)

#### CLI
- Renamed `ProgramInfo` to `ExecutionDetails` since there is another `ProgramInfo` struct in the source code.
- [BREAKING] renamed `stack_init` and `advice_tape` to `operand_stack` and `advice_stack` in input files.
- Enabled specifying additional advice provider inputs (i.e., advice map and Merkle store) via the input files.

#### Assembly
- Added new instructions: `is_odd`, `assert_eqw`, `mtree_merge`.
- [BREAKING] Removed `mtree_cwm` instruction.
- Added `breakpoint` instruction to help with debugging.

#### VM Internals
- [BREAKING] Renamed `Read`, `ReadW` operations into `AdvPop`, `AdvPopW`.
- [BREAKING] Replaced `AdviceSet` with `MerkleStore`.
- Updated Winterfell dependency to v0.6.0.

## 0.4.0 (2023-02-27)

#### Advice provider
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -19,7 +19,7 @@ We are using [Github Flow](https://docs.github.com/en/get-started/quickstart/git
### Branching
- The current active branch is `next`. Every branch with a fix/feature must be forked from `next`.

- The branch name should contain a short issue/feature description separated with hyphens [(kebab-case)](https://en.wikipedia.org/wiki/Letter_case#Kebab_case).
- The branch name should contain a short issue/feature description separated with hyphens [(kebab-case)](https://en.wikipedia.org/wiki/Letter_case#Kebab_case).

For example, if the issue title is `Fix functionality X in component Y` then the branch name will be something like: `fix-x-in-y`.

11 changes: 7 additions & 4 deletions Cargo.toml
@@ -10,10 +10,13 @@ members = [
"verifier"
]

[profile.release]
[profile.optimized]
inherits = "release"
codegen-units = 1
lto = true

[profile.bench]
codegen-units = 1
lto = true
[profile.test-release]
inherits = "release"
debug = true
debug-assertions = true
overflow-checks = true
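The renamed profiles are selected with Cargo's `--profile` flag (stable since Cargo 1.57); a sketch mirroring the Makefile targets in this PR:

```shell
# Heavily optimized build (inherits release, single codegen unit + LTO)
cargo build --profile optimized --features concurrent,executable

# Tests at release optimization levels, but with debug info,
# debug assertions, and overflow checks enabled
cargo test --profile test-release --features internals
```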
12 changes: 10 additions & 2 deletions Makefile
@@ -1,5 +1,13 @@
FEATURES_INTERNALS=--features internals
FEATURES_CONCURRENT_EXEC=--features concurrent,executable
PROFILE_OPTIMIZED=--profile optimized
PROFILE_TEST=--profile test-release

bench:
cargo bench $(PROFILE_OPTIMIZED)

exec:
cargo build --release --features concurrent,executable
cargo build $(PROFILE_OPTIMIZED) $(FEATURES_CONCURRENT_EXEC)

test:
RUSTFLAGS="-C debug-assertions -C overflow-checks -C debuginfo=2" cargo test --release --features internals
cargo test $(PROFILE_TEST) $(FEATURES_INTERNALS)
38 changes: 20 additions & 18 deletions README.md
@@ -16,7 +16,7 @@ Miden VM is a zero-knowledge virtual machine written in Rust. For any program ex
* If you'd like to learn more about STARKs, check out the [references](#references) section.

### Status and features
Miden VM is currently on release v0.4. In this release, most of the core features of the VM have been stabilized, and most of the STARK proof generation has been implemented. While we expect to keep making changes to the VM internals, the external interfaces should remain relatively stable, and we will do our best to minimize the amount of breaking changes going forward.
Miden VM is currently on release v0.5. In this release, most of the core features of the VM have been stabilized, and most of the STARK proof generation has been implemented. While we expect to keep making changes to the VM internals, the external interfaces should remain relatively stable, and we will do our best to minimize the amount of breaking changes going forward.

The next version of the VM is being developed in the [next](https://github.com/0xPolygonMiden/miden-vm/tree/next) branch. Documentation for the latest features and changes in the `next` branch is also available [here](https://0xpolygonmiden.github.io/miden-vm/intro/main.html).

@@ -81,35 +81,37 @@ When executed on a single CPU core, the current version of Miden VM operates at

| VM cycles | Execution time | Proving time | RAM consumed | Proof size |
| :-------------: | :------------: | :----------: | :-----------: | :--------: |
| 2<sup>10</sup> | 1 ms | 80 ms | 14 MB | 52 KB |
| 2<sup>12</sup> | 2 ms | 280 ms | 43 MB | 61 KB |
| 2<sup>14</sup> | 8 ms | 1.1 sec | 163 MB | 71 KB |
| 2<sup>16</sup> | 28 ms | 4.4 sec | 640 MB | 81 KB |
| 2<sup>18</sup> | 85 ms | 19.2 sec | 2.6 GB | 92 KB |
| 2<sup>20</sup> | 320 ms | 86 sec | 10 GB | 104 KB |
| 2<sup>10</sup> | 1 ms | 80 ms | 20 MB | 47 KB |
| 2<sup>12</sup> | 2 ms | 260 ms | 52 MB | 57 KB |
| 2<sup>14</sup> | 8 ms | 0.9 sec | 240 MB | 66 KB |
| 2<sup>16</sup> | 28 ms | 4.6 sec | 950 MB | 77 KB |
| 2<sup>18</sup> | 85 ms | 15.5 sec | 3.7 GB | 89 KB |
| 2<sup>20</sup> | 310 ms | 67 sec | 14 GB | 100 KB |

As can be seen from the above, proving time roughly doubles with every doubling in the number of cycles, but proof size grows much slower.
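A quick sanity check of this claim against the updated 96-bit table above (each table row quadruples the cycle count, so "roughly 2x per doubling" corresponds to row-to-row proving-time ratios near 4):

```shell
# Proving times in ms for 2^10, 2^12, ..., 2^20 cycles, from the table above
times="80 260 900 4600 15500 67000"
prev=""
for t in $times; do
  if [ -n "$prev" ]; then
    # Print the ratio between consecutive rows; values near 4 are
    # consistent with proving time doubling per doubling of cycles
    echo "$prev $t" | awk '{ printf "%.2f\n", $2 / $1 }'
  fi
  prev=$t
done
```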

We can also generate proofs at a higher security level. The cost of doing so is roughly a doubling of the proving time and roughly a 40% increase in proof size. In the benchmarks below, the same Fibonacci calculator program was executed on an Apple M1 Pro CPU at a 128-bit target security level:

| VM cycles | Execution time | Proving time | RAM consumed | Proof size |
| :-------------: | :------------: | :----------: | :-----------: | :--------: |
| 2<sup>10</sup> | 1 ms | 140 ms | 26 MB | 73 KB |
| 2<sup>12</sup> | 2 ms | 510 ms | 90 MB | 87 KB |
| 2<sup>14</sup> | 8 ms | 2.1 sec | 350 MB | 98 KB |
| 2<sup>16</sup> | 28 ms | 7.9 sec | 1.4 GB | 115 KB |
| 2<sup>18</sup> | 85 ms | 35 sec | 5.6 GB | 132 KB |
| 2<sup>20</sup> | 320 ms | 151 sec | 20.3 GB | 149 KB |
| 2<sup>10</sup> | 1 ms | 300 ms | 30 MB | 61 KB |
| 2<sup>12</sup> | 2 ms | 590 ms | 106 MB | 78 KB |
| 2<sup>14</sup> | 8 ms | 1.7 sec | 500 MB | 91 KB |
| 2<sup>16</sup> | 28 ms | 6.7 sec | 2.0 GB | 106 KB |
| 2<sup>18</sup> | 85 ms | 27.5 sec | 8.0 GB | 122 KB |
| 2<sup>20</sup> | 310 ms | 126 sec | 24.0 GB | 138 KB |

### Multi-core prover performance
STARK proof generation is massively parallelizable. Thus, by taking advantage of multiple CPU cores we can dramatically reduce proof generation time. For example, when executed on a high-end 8-core CPU (Apple M1 Pro), the current version of Miden VM operates at around 80 KHz. And when executed on a high-end 64-core CPU (Amazon Graviton 3), the VM operates at around 320 KHz.
STARK proof generation is massively parallelizable. Thus, by taking advantage of multiple CPU cores we can dramatically reduce proof generation time. For example, when executed on an 8-core CPU (Apple M1 Pro), the current version of Miden VM operates at around 100 KHz. And when executed on a 64-core CPU (Amazon Graviton 3), the VM operates at around 250 KHz.

In the benchmarks below, the VM executes the same Fibonacci calculator program for 2<sup>20</sup> cycles at 96-bit target security level:

| Machine | Execution time | Proving time |
| ------------------------------ | :------------: | :----------: |
| Apple M1 Pro (8 threads) | 320 ms | 13 sec |
| Amazon Graviton 3 (64 threads) | 390 ms | 3.3 sec |
| Machine | Execution time | Proving time | Execution % |
| ------------------------------ | :------------: | :----------: | :---------: |
| Apple M1 Pro (8 threads) | 310 ms | 9.8 sec | 3.1% |
| Apple M2 Max (16 threads) | 290 ms | 7.7 sec | 3.6% |
| AMD Ryzen 9 5950X (16 threads) | 270 ms | 10.7 sec | 2.6% |
| Amazon Graviton 3 (64 threads) | 330 ms | 3.7 sec | 9.0% |

## References
Proofs of execution generated by Miden VM are based on STARKs. A STARK is a novel proof-of-computation scheme that allows you to create an efficiently verifiable proof that a computation was executed correctly. The scheme was developed by Eli Ben-Sasson, Michael Riabzev et al. at Technion - Israel Institute of Technology. STARKs do not require an initial trusted setup, and rely on very few cryptographic assumptions.
10 changes: 5 additions & 5 deletions air/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "miden-air"
version = "0.4.0"
version = "0.5.0"
description = "Algebraic intermediate representation of Miden VM processor"
authors = ["miden contributors"]
readme = "README.md"
@@ -28,10 +28,10 @@ default = ["std"]
std = ["vm-core/std", "winter-air/std"]

[dependencies]
vm-core = { package = "miden-core", path = "../core", version = "0.4", default-features = false }
winter-air = { package = "winter-air", version = "0.5", default-features = false }
vm-core = { package = "miden-core", path = "../core", version = "0.5", default-features = false }
winter-air = { package = "winter-air", version = "0.6", default-features = false }

[dev-dependencies]
criterion = "0.4"
proptest = "1.0"
rand-utils = { package = "winter-rand-utils", version = "0.5" }
proptest = "1.1"
rand-utils = { package = "winter-rand-utils", version = "0.6" }
2 changes: 1 addition & 1 deletion air/README.md
@@ -20,4 +20,4 @@ If you'd like to learn more about AIR, the following blog posts from StarkWare a
* [StarkDEX Deep Dive: the STARK Core Engine](https://medium.com/starkware/starkdex-deep-dive-the-stark-core-engine-497942d0f0ab)

## License
This project is [MIT licensed](../LICENSE).
This project is [MIT licensed](../LICENSE).
53 changes: 26 additions & 27 deletions air/src/chiplets/hasher/mod.rs
@@ -22,15 +22,15 @@ pub const NUM_CONSTRAINTS: usize = 31;
/// The number of periodic columns which are used as selectors to specify a particular row or rows
/// within the hash cycle.
pub const NUM_PERIODIC_SELECTOR_COLUMNS: usize = 3;
/// The total number of periodic columns used by the hash processor, which is the sum of the number
/// The total number of periodic columns used by the hasher chiplet, which is the sum of the number
/// of periodic selector columns plus the columns of round constants for the Rescue Prime Optimized
/// hash permutation.
pub const NUM_PERIODIC_COLUMNS: usize = STATE_WIDTH * 2 + NUM_PERIODIC_SELECTOR_COLUMNS;

// PERIODIC COLUMNS
// ================================================================================================

/// Returns the set of periodic columns required by the Hash processor.
/// Returns the set of periodic columns required by the hasher chiplet.
///
/// The columns consist of:
/// - k0 column, which has a repeating pattern of 7 zeros followed by a single one.
@@ -107,60 +107,59 @@ pub fn get_transition_constraint_count() -> usize {
NUM_CONSTRAINTS
}

/// Enforces constraints for the hash chiplet.
/// Enforces constraints for the hasher chiplet.
///
/// - The `processor_flag` indicates whether the current row is in the section of the chiplets
/// module that contains this processor's trace.
/// - The `transition_flag` indicates whether or not the constraints should be enforced for this
/// transition. It is expected to be false when the next row will be the last row of this
/// processor's execution trace.
/// - The `hasher_flag` determines if the hasher chiplet is currently enabled. It should be
/// computed by the caller and set to `Felt::ONE`.
/// - The `transition_flag` indicates whether this is the last row of this chiplet's execution
/// trace, in which case the constraints should not be enforced.
pub fn enforce_constraints<E: FieldElement<BaseField = Felt>>(
frame: &EvaluationFrame<E>,
periodic_values: &[E],
result: &mut [E],
processor_flag: E,
hasher_flag: E,
transition_flag: E,
) {
// Enforce that the row address increases by 1 at each step when the transition flag is set.
result.agg_constraint(
0,
processor_flag * transition_flag,
hasher_flag * transition_flag,
frame.row_next() - frame.row() - E::ONE,
);
let mut index = 1;

index += enforce_selectors(frame, periodic_values, &mut result[index..], processor_flag);
index += enforce_hasher_selectors(frame, periodic_values, &mut result[index..], hasher_flag);

index += enforce_node_index(frame, periodic_values, &mut result[index..], processor_flag);
index += enforce_node_index(frame, periodic_values, &mut result[index..], hasher_flag);

enforce_hasher_state(frame, periodic_values, &mut result[index..], processor_flag);
enforce_hasher_state(frame, periodic_values, &mut result[index..], hasher_flag);
}

// TRANSITION CONSTRAINT HELPERS
// ================================================================================================

/// Enforces that all selectors and selector transitions are valid.
/// Enforces validity of the internal selectors of the hasher chiplet.
///
/// - All selectors must contain binary values.
/// - s1 and s2 must be copied to the next row unless f_out is set in the current or next row.
/// - When a cycle ends by absorbing more elements or a Merkle path node, ensure the next value of
/// s0 is always zero. Otherwise, s0 should be unconstrained.
/// - Prevent an invalid combination of flags where s_0 = 0 and s_1 = 1.
fn enforce_selectors<E: FieldElement>(
fn enforce_hasher_selectors<E: FieldElement>(
frame: &EvaluationFrame<E>,
periodic_values: &[E],
result: &mut [E],
processor_flag: E,
hasher_flag: E,
) -> usize {
// Ensure the selectors are all binary values.
for (idx, result) in result.iter_mut().take(NUM_SELECTORS).enumerate() {
*result = processor_flag * is_binary(frame.s(idx));
*result = hasher_flag * is_binary(frame.s(idx));
}
let mut constraint_offset = NUM_SELECTORS;

// Ensure the values in s1 and s2 in the current row are copied to the next row when f_out != 1
// and f_out' != 1.
let copy_selectors_flag = processor_flag
let copy_selectors_flag = hasher_flag
* binary_not(frame.f_out(periodic_values))
* binary_not(frame.f_out_next(periodic_values));
result[constraint_offset] = copy_selectors_flag * (frame.s_next(1) - frame.s(1));
@@ -171,15 +170,15 @@ fn enforce_selectors<E: FieldElement>(

// s0 should be unconstrained except in the last row of the cycle if any of f_abp, f_mpa, f_mva,
// or f_mua are 1, in which case the next value of s0 must be zero.
result[constraint_offset] = processor_flag
result[constraint_offset] = hasher_flag
* periodic_values[0]
* frame.s_next(0)
* (frame.f_abp() + frame.f_mpa() + frame.f_mva() + frame.f_mua());
constraint_offset += 1;

// Prevent an invalid combinations of flags.
result[constraint_offset] =
processor_flag * periodic_values[0] * binary_not(frame.s(0)) * frame.s(1);
hasher_flag * periodic_values[0] * binary_not(frame.s(0)) * frame.s(1);
constraint_offset += 1;

constraint_offset
Expand All @@ -197,22 +196,22 @@ fn enforce_node_index<E: FieldElement>(
frame: &EvaluationFrame<E>,
periodic_values: &[E],
result: &mut [E],
processor_flag: E,
hasher_flag: E,
) -> usize {
let mut constraint_offset = 0;

// Enforce that the node index is 0 when a computation is finished.
result[constraint_offset] = processor_flag * frame.f_out(periodic_values) * frame.i();
result[constraint_offset] = hasher_flag * frame.f_out(periodic_values) * frame.i();
constraint_offset += 1;

// When a new node is being absorbed into the hasher state, ensure that the shift to the right
// was performed correctly by enforcing that the discarded bit is a binary value.
result[constraint_offset] = processor_flag * frame.f_an(periodic_values) * is_binary(frame.b());
result[constraint_offset] = hasher_flag * frame.f_an(periodic_values) * is_binary(frame.b());
constraint_offset += 1;

// When we are not absorbing a new row and the computation is not finished, make sure the value
// of i is copied to the next row.
result[constraint_offset] = processor_flag
result[constraint_offset] = hasher_flag
* (E::ONE - frame.f_an(periodic_values) - frame.f_out(periodic_values))
* (frame.i_next() - frame.i());
constraint_offset += 1;
@@ -233,13 +232,13 @@ fn enforce_hasher_state<E: FieldElement + From<Felt>>(
frame: &EvaluationFrame<E>,
periodic_values: &[E],
result: &mut [E],
processor_flag: E,
hasher_flag: E,
) -> usize {
let mut constraint_offset = 0;

// Get the constraint flags and the RPO round constants from the periodic values.
let hash_flag = processor_flag * binary_not(periodic_values[0]);
let last_row = processor_flag * periodic_values[0];
let hash_flag = hasher_flag * binary_not(periodic_values[0]);
let last_row = hasher_flag * periodic_values[0];
let ark = &periodic_values[NUM_PERIODIC_SELECTOR_COLUMNS..];

// Enforce the RPO round constraints.
2 changes: 1 addition & 1 deletion air/src/chiplets/hasher/tests.rs
@@ -31,7 +31,7 @@ fn hash_round() {
// TEST HELPER FUNCTIONS
// ================================================================================================

/// Returns the result of Hash processor's constraint evaluations on the provided frame starting at
/// Returns the result of hasher chiplet's constraint evaluations on the provided frame starting at
/// the specified row.
fn get_constraint_evaluation(
frame: EvaluationFrame<Felt>,