
Releases: ArweaveTeam/arweave

Release 2.7.4

01 Aug 01:36

Arweave 2.7.4 Release Notes

If you were previously running the 2.7.4 pre-release, we recommend you update to this release. This release includes all changes from the pre-release, plus some additional fixes and features.

Mining Performance Improvements

This release includes a number of mining performance improvements, and is the first release for which we've seen a single-node miner successfully mine a full replica at almost the full expected hashrate (56 partitions mined at 95% efficiency at the time of the test). If your miner previously saw a loss of hashrate at higher partition counts despite low CPU utilization, it might be worth retesting.

Erlang VM arguments

Adjusting the arguments provided to the Erlang VM can sometimes improve mining hashrate. In particular we found that on some high-core count CPUs, restricting the number of threads available to Erlang actually improved performance. You'll want to test these options for yourself as behavior varies dramatically from system to system.

This release introduces a new command-line separator: --

All arguments before the -- separator are passed to the Erlang VM, all arguments after it are passed to Arweave. If the -- is omitted, all arguments are passed to Arweave.

For example, to restrict the Erlang VM to 24 scheduler threads (and therefore limit the threads available to Arweave), you would build a command like:

./bin/start +S 24:24 -- <regular arweave command line flags>
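As a fuller sketch (the mining address and data directory below are hypothetical placeholders), the regular Arweave flags simply follow the separator:

./bin/start +S 24:24 -- mine mining_addr <your_mining_address> data_dir /path/to/data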

Faster Node Shutdown

Unrelated to the above changes, this release includes a couple of fixes that should reduce the time it takes for a node to shut down following the ./bin/stop command.

Solution recovery

This release includes several features and bug fixes intended to increase the chance that a valid solution results in a confirmed block.

Rebasing

When two or more miners post blocks at the same height, the block that is adopted by a majority of the network first will be added to the blockchain and the other blocks will be orphaned. Miners of orphaned blocks do not receive block rewards for those blocks.

This release introduces the ability for orphaned blocks to be rebased. If a miner detects that their block has been orphaned, but the block solution is still valid, the miner will take that solution and build a new block with it. When a block is rebased a rebasing_block message will be printed to the logs.

Last minute proof fetching

After finding a valid solution a miner goes through several steps as they build a block. One of those steps involves loading the selected chunk proofs from disk. Occasionally those proofs might be missing or corrupt. Prior to this release, when that happened the solution would be rejected and the miner would return to hashing. With this release the miner will reach out to several peers and request the missing proofs; if successful, the miner can continue building and publishing the block.

last_step_checkpoints recovery

This release provides more robust logic for generating the last_step_checkpoints field in mined blocks. Prior to this release there were some scenarios where a miner would unnecessarily reject a solution due to missing last_step_checkpoints.

VDF Server Improvements

In addition to a number of VDF server/client bug fixes and performance improvements, this release includes two new VDF server configurations.

VDF Forwarding

You can now set up a node as a VDF forwarder. If a node specifies both the vdf_server_trusted_peer and vdf_client_peer flags, it will receive VDF updates from the specified VDF Servers and provide them to the specified VDF clients. The push/pull behavior remains unchanged: any of the server/client relationships can be configured to push VDF updates or to pull them.

Public VDF

If a VDF server is started with the enable public_vdf_server flag, it will provide VDF updates to any peer that requests them, without needing to first whitelist that peer via the vdf_client_peer flag.
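As a sketch (the peer addresses are hypothetical placeholders), a forwarding node that also serves public VDF might be started like:

./bin/start vdf_server_trusted_peer <upstream_vdf_server> vdf_client_peer <downstream_client> enable public_vdf_server <other flags>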

/recent endpoint

This release adds a new /recent endpoint which returns a list of recent forks the node has detected, as well as the last 18 blocks the node has received and the timestamps at which it received them.
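For example, assuming a node running locally on the default HTTP port (1984):

curl http://127.0.0.1:1984/recent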

Webhooks

This release adds additional webhook support. When webhooks are configured, a node will POST data to a provided URL (aka a webhook) when certain events are triggered.

Node webhooks can only be configured via a JSON config_file. For example:

{
  "webhooks": [
    {
      "events": ["transaction", "block"],
      "url": "https://example.com/block_or_tx",
      "headers": {
        "Authorization": "Bearer 123"
      }
    },
    {
      "events": ["transaction_data"],
      "url": "http://127.0.0.1:1985/tx_data"
    }
  ]
}
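The node is then launched pointing at that file (the path is a placeholder):

./bin/start config_file /path/to/config.json <other flags>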

The supported events are:

  • transaction : POSTS
    • the transaction header whenever this node accepts and validates a new transaction
  • transaction_data : POSTS
    • { "event": "transaction_data_synced", "txid": <TXID> } once this node has received all the chunks belonging to the transaction TXID
    • { "event": "transaction_orphaned", "txid": <TXID> } when this node detects that TXID has been orphaned
    • { "event": "transaction_data_removed", "txid": <TXID> } when this node detects that at least one chunk has been removed from a previously synced transaction
  • block : POSTS
    • the block header whenever this node accepts and validates a new block

In all cases the POST payload is JSON-encoded.

Benchmarking and data utilities

  • ./bin/benchmark-hash prints benchmark data on H0 and H1/H2 hashing performance
  • Fix for ./bin/data-doctor bench: it should now correctly report storage module read performance
  • data-doctor dump dumps all block headers and transactions

Miscellaneous Bug Fixes and Additions

  1. Several coordinated mining and mining pool bug fixes
  2. Fix /metrics output when the mining address included an underscore (_)
  3. Fix bug in start_from_block and start_from_latest_state
  4. Add CORS header to /metrics so it can be queried from an in-browser app
  5. Blacklist handling optimizations

Pre-Release 2.7.4

31 May 13:01

This is a pre-release and has not gone through full release validation; please install with that in mind.

Note: In order to test the VDF client/server fixes please make sure to set your VDF server to vdf-server-4.arweave.xyz. We will keep vdf-server-3.arweave.xyz running an older version of the software (without the fixes) in case there are issues with this release.

Summary of changes in this release:

  • Fixes for several VDF client/server communication issues
  • Fixes for some pool mining bugs
  • Solution rebasing to lower the orphan rate
  • Last-minute proof fetching when proofs can't be found locally
  • More support for webhooks
  • Performance improvements for syncing and blacklist processing

Release 2.7.3

25 Mar 15:09

Arweave 2.7.3 Release Notes

2.7.3 is a minor release containing:

Re-packing in place

You can now repack a storage module from one packing address to another without needing any extra storage space. The repacking happens "in place", replacing the original data with the repacked data.

See the storage_module section in the arweave help (./bin/start help) for more information.

Packing bug fixes and performance improvements

This release contains several packing performance improvements and bug fixes.

Coordinated Mining performance improvement

This release implements an improvement in how nodes process H1 batches that they receive from their Coordinated Mining peers. As a result the cm_in_batch_timeout is no longer needed and has been deprecated.

Release 2.7.2

01 Mar 14:22
8a0bef6

This release introduces a hard fork that activates at height 1391330, approximately 2024-03-26 14:00 UTC.

Coordinated Mining

When coordinated mining is configured, multiple nodes can cooperate to find mining solutions for the same mining address without the risk of losing reserved rewards or having the mining address blacklisted. Without coordinated mining, if two nodes publish blocks at the same height and with the same mining address, they may lose their reserved rewards and have their mining address blacklisted (see the Mining Guide for more information). Coordinated mining allows multiple nodes which each store a disjoint subset of the weave to reap the hashrate benefits of more two-chunk solutions.

Basic System

In a coordinated mining cluster there are 2 roles:

  1. Exit Node
  2. Miners

All nodes in the cluster share the same mining address. Each Miner generates H1 hashes for the partitions they store. Occasionally they will need an H2 for a packed partition they don't store. In this case, they can find another Miner in the coordinated mining cluster who does store the required partition packed with the required address, send them the H1, and ask them to calculate the H2. When a valid solution is found (either one- or two-chunk) the solution is sent to the Exit Node. Since the Exit Node is the only node in the coordinated mining cluster which publishes blocks, there's no risk of slashing. This can be further enforced by ensuring only the Exit Node stores the mining address private key (and therefore only the Exit Node can sign blocks for that mining address).

Every node in the coordinated mining cluster is free to peer with any other nodes on the network as normal.

Single-Miner One Chunk Flow

[Diagram: Single-Miner One Chunk Flow]

Note: The single-miner two-chunk flow (where Miner1 stores both the H1 and H2 partitions) is very similar.

Coordinated Two Chunk Flow

[Diagram: Coordinated Two Chunk Flow]

Configuration

  1. All nodes in the Coordinated Mining cluster must specify the coordinated_mining parameter
  2. All nodes in the Coordinated Mining cluster must specify the same secret via the cm_api_secret parameter. A secret can be a string of any length.
  3. All miners in the Coordinated Mining cluster should identify all other miners in the cluster using the cm_peer multi-use parameter.
    • Note: an exit node can also optionally mine, in which case it is also considered a miner and should be identified by the cm_peer parameter
  4. All miners (excluding the exit node) should identify the exit node via the cm_exit_peer parameter.
    • Note: the exit node should not include the cm_exit_peer parameter
  5. All miners in the Coordinated Mining cluster can be configured as normal but they should all specify the same mining_addr.

There is one additional parameter which can be used to tune performance (a sample cluster configuration follows this list):

  • cm_out_batch_timeout: The frequency in milliseconds of sending other nodes in the coordinated mining setup a batch of H1 values to hash. A higher value reduces network traffic, a lower value reduces hashing latency. Default is 20.
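Putting the steps above together, here is a sketch of a small cluster with one non-mining exit node and two miners. The IP addresses and shared secret are hypothetical placeholders, and the rest of each command line (storage modules, data directory, and so on) is omitted:

Exit node (10.0.0.1):
./bin/start coordinated_mining cm_api_secret <shared_secret> mining_addr <shared_mining_addr> <other flags>

Miner (10.0.0.2):
./bin/start coordinated_mining cm_api_secret <shared_secret> cm_peer 10.0.0.3:1984 cm_exit_peer 10.0.0.1:1984 mining_addr <shared_mining_addr> mine <other flags>

Miner (10.0.0.3):
./bin/start coordinated_mining cm_api_secret <shared_secret> cm_peer 10.0.0.2:1984 cm_exit_peer 10.0.0.1:1984 mining_addr <shared_mining_addr> mine <other flags>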

Native Support for Pooled Mining

The Arweave node now has built-in support for pooled mining.

New configuration parameters (see the arweave node help for descriptions; a sketch follows the list):

  • is_pool_server
  • is_pool_client
  • pool_api_key
  • pool_server_address
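As a hedged sketch only (the value formats here are assumptions; consult the node help for the exact syntax), a pool client might be configured like:

./bin/start is_pool_client pool_api_key <api_key> pool_server_address <pool_server> mining_addr <addr> mine <other flags>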

Mining Performance Improvements

Implemented several optimizations and bug fixes to enable more miners to achieve their maximal hashrate - particularly at higher partition counts.

A summary of changes:

  • Increase the degree of horizontal distribution used by the mining processes to remove performance bottlenecks at higher partition counts
  • Optimize the Erlang VM memory allocation, management, and garbage collection
  • Fix several out of memory errors that could occur at higher partition counts
  • Fix a bug which could cause valid chunks to be discarded before being hashed

Updated Mining Performance Report:

=========================================== Mining Performance Report ============================================

VDF Speed:  3.00 s
H1 Solutions:     0
H2 Solutions:     3
Confirmed Blocks: 0

Local mining stats:
+-----------+-----------+----------+-------------+-------------+---------------+------------+------------+--------------+
| Partition | Data Size | % of Max |  Read (Cur) |  Read (Avg) |  Read (Ideal) | Hash (Cur) | Hash (Avg) | Hash (Ideal) |
+-----------+-----------+----------+-------------+-------------+---------------+------------+------------+--------------+
|     Total |   2.0 TiB |      5 % |   1.3 MiB/s |   1.3 MiB/s |    21.2 MiB/s |      5 h/s |      5 h/s |       84 h/s |
|         1 |   1.2 TiB |     34 % |   0.8 MiB/s |   0.8 MiB/s |    12.4 MiB/s |      3 h/s |      3 h/s |       49 h/s |
|         2 |   0.8 TiB |     25 % |   0.5 MiB/s |   0.5 MiB/s |     8.8 MiB/s |      2 h/s |      2 h/s |       35 h/s |
|         3 |   0.0 TiB |      0 % |   0.0 MiB/s |   0.0 MiB/s |     0.0 MiB/s |      0 h/s |      0 h/s |        0 h/s |
+-----------+-----------+----------+-------------+-------------+---------------+------------+------------+--------------+

(All values are reset when a node launches)

  • H1 Solutions / H2 Solutions display the number of each solution type discovered
  • Confirmed Blocks displays the number of blocks that were mined by this node and accepted by the network
  • Cur values refer to the most recent value (e.g. the average over the last ~10 seconds)
  • Avg values refer to the all-time running average
  • Ideal refers to the optimal rate given the VDF speed and the amount of data currently packed
  • % of Max refers to how much of the given partition - or the whole weave - is packed

Protocol Changes

The 2.7.2 Hard Fork is scheduled for block 1391330 (or roughly 2024-03-26 14:00 UTC), at which time the following protocol changes will activate:

  • The difficulty of a 1-chunk solution increases by 100x to better incentivize full-weave replicas
  • An additional pricing transition phase is scheduled to start in November 2024
  • A pricing cap of 340 Winston per GiB/minute is implemented until the November pricing transition
  • The checkpoint depth is reduced from 50 blocks to 18
  • Unnecessary poa2 chunks are rejected early to prevent a low-impact spam attack. Even in the worst case this attack would add only minimal bloat to the blockchain and thus wasn't a practical exploit; the vector is closed as a matter of good hygiene.

Additional Bug Fixes and Improvements

  • Enable RandomX support for macOS and ARM/aarch64
  • Simplified TLS protocol support
    • See new configuration parameters tls_cert_file and tls_key_file to configure TLS
  • Add several more prometheus metrics:
    • debug-only metrics to track memory performance and processor utilization
    • mining performance metrics
    • coordinated mining metrics
    • metrics to track network characteristics (e.g. partitions covered in blocks, current/scheduled price, chunks per block)
  • Introduce a bin/data-doctor utility
    • data-doctor merge can merge multiple storage modules into 1
    • data-doctor bench runs a series of read rate benchmarks
  • Introduce a new bin/benchmark-packing utility to benchmark a node's packing performance
    • The utility will generate input files if necessary and will process as close to 1 GiB of data as possible while still allowing each core to process the same number of whole chunks.
    • Results are written to a CSV file and printed to the console

Release 2.7.1

20 Nov 19:03

This release introduces a hard fork that activates at height 1316410, approximately 2023-12-05 14:00 UTC.

Note: if you are running your own VDF Servers, update the server nodes first, then the client nodes.

Bug fixes

Address Occasional Block Validation Failures on VDF Clients

This release fixes an error that would occasionally cause VDF Clients to fail to validate valid blocks. This could occur following a VDF Difficulty Retarget if the VDF client had cached a stale VDF session with steps computed at the prior difficulty. With this change VDF sessions are refreshed whenever the difficulty retargets.

Stabilize VDF Difficulty Oscillation

This release fixes an error that caused unnecessary oscillation when retargeting VDF difficulty. With this patch the VDF difficulty will adjust smoothly towards a difficulty that will yield a network average VDF speed of 1 second.

Ensure VDF Clients Process Updates from All Configured VDF Servers

This release makes an update to the VDF Client code so that it processes all updates from all configured VDF Servers. Prior to this change a VDF Client would only switch VDF Servers when the active server became non-responsive - this could cause a VDF Client to get "stuck" on one VDF Server even if an alternate server provided better data.

Delay the pricing transition

This release introduces a patch that adds to the transition period before the activation of Arweave 2.6’s trustless price oracle, in order to give miners additional time to on-board packed data to the network. The release delays the onset of the transition window to roughly February 20, 2024.


The release comes with the prebuilt binaries for the Linux x86_64 platforms.

If you want to run the miner from the existing Git folder, execute the following command:
git fetch --all --tags && git checkout -f N.2.7.1

See the Mining Guide for further instructions.

If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at [email protected].

Release 2.7.0

20 Sep 20:16

This release introduces a hard fork that activates at height 1275480, approximately 2023-10-05 07:00 UTC.

New features

Flexible Merkle Tree Combinations

When combining different data transactions, the merkle trees for each data root can be added to the larger merkle tree without being rebuilt or modified. This makes it easier, quicker, and less CPU-intensive to combine multiple data transactions.

Documentation on Merkle Tree Rebasing: https://github.com/ArweaveTeam/examples/blob/main/rebased_merkle_tree/README.md
Example Code: https://github.com/ArweaveTeam/examples/blob/main/rebased_merkle_tree/rebased_merkle_tree.js

VDF Retargeting

The average VDF speed across the network is now tracked and used to increase or decrease the VDF difficulty so as to maintain a roughly 1-second VDF time across the network.

Bug fixes and other updates

Delay the pricing transition

This release introduces a patch that adds to the transition period before the activation of Arweave 2.6’s trustless price oracle, in order to give miners additional time to on-board packed data to the network. The release delays the onset of the transition window to roughly Dec. 14, 2023.

Memory optimization when mining

This change allows the mining server to periodically reclaim memory. Previously when a miner was configured with a suitably high mining_server_chunk_cache_size_limit (e.g. 5,000-7,000 per installed GB of RAM) memory usage would creep up, sometimes causing an out of memory error. With this change, that memory usage can be periodically reclaimed, delaying or eliminating the OOM error. Further performance and memory improvements are planned in the next release.
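As an illustrative sketch (the machine size is hypothetical), a miner with 16 GB of RAM using the suggested 6,000 chunks per installed GB would set a limit of roughly 16 × 6,000 = 96,000:

./bin/start mining_server_chunk_cache_size_limit 96000 <other flags>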

Start from local state

Introduce the start_from_latest_state and start_from_block configuration options, allowing a miner to be launched from its local state rather than downloading the initialization data from peers. This is most useful when bootstrapping a testnet.
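For example (a sketch; start_from_block presumably takes a block identifier, so the value below is a placeholder and the exact argument form is documented in the node help):

./bin/start start_from_latest_state <other flags>
./bin/start start_from_block <block_id> <other flags>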

Ensure genesis transaction data is served via the /tx endpoint

Fix for issue #455


The release comes with the prebuilt binaries for the Linux x86_64 platforms.

If you want to run the miner from the existing Git folder, execute the following command:
git fetch --all --tags && git checkout -f N.2.7.0

See the Mining Guide for further instructions.

If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at [email protected].

Release 2.6.10

15 Jun 16:20
652275e

The release introduces a few improvements, bug fixes, and one new endpoint.

  • Fix two memory issues that occasionally cause out-of-memory exceptions:
    • When running a VDF server with a slow VDF client, the memory footprint of the VDF server would gradually increase until all memory was consumed;
    • When syncing weave data the memory use of a node would spike when copying data locally between neighboring partitions, occasionally triggering an out-of-memory exception
  • implement the GET /total_supply endpoint to return the sum of the balances of all existing accounts in the latest state, in Winston (see the example after this list);
  • several performance improvements to the weave sync process;
  • remove the following metrics from the /metrics endpoint (together accounting for several thousand individual metrics):
    • erlang_vm_msacc_XXX
    • erlang_vm_allocators
    • erlang_vm_dist_XXX
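For example, assuming a locally running node on the default HTTP port (1984):

curl http://127.0.0.1:1984/total_supply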

The release comes with the prebuilt binaries for the Linux x86_64 platforms.

If you want to run the miner from the existing Git folder, execute the following command:

git fetch --all --tags && git checkout -f N.2.6.10

See the mining guide for further instructions.

If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at [email protected].

Release 2.6.9

01 Jun 01:39
75336d2

The release introduces a few improvements and bug fixes.

  • Improve syncing speed and stability significantly;
  • fix the issue where the node connected to a VDF server would occasionally lag behind;
  • add support for the VDF server pull interface, removing the requirement of a static IP when using a VDF server; to enable it, run your client with enable vdf_server_pull;
  • improve the mining performance of the nodes connected to the VDF server;
  • fix the bug introduced in 2.6.4 where two-chunk solutions with the chunks coming from different partitions would be dropped;
  • disable the server-side packing/unpacking of chunks by default (it used to be enabled but was very strictly limited); enable with enable pack_served_chunks;
  • add the GET /inflation/{height} endpoint returning the inflation reward for the given height (see the example after this list);
  • reduce peak memory footprint during node initialization, and baseline memory footprint while syncing.
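For example, assuming a locally running node on the default HTTP port (1984), querying an arbitrary height:

curl http://127.0.0.1:1984/inflation/1000000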

Note: if you are running your own VDF servers, update the server nodes first, then the client nodes.

The release comes with the prebuilt binaries for the Linux x86_64 platforms.

If you want to run the miner from the existing Git folder, execute the following command:

git fetch --all --tags && git checkout -f N.2.6.9

See the mining guide for further instructions.

If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at [email protected].

Release 2.6.8

27 May 18:43
7d89b53

This release introduces a patch that adds to the transition period before the activation of Arweave 2.6’s trustless price oracle, in order to give miners additional time to on-board packed data to the network. The release delays the onset of the transition window by 4 months, and extends the interpolation between the old and new pricing systems from 12 months to 18. This release introduces a hard fork that activates at height 1,189,560, approximately 2023-05-30 16:00 UTC.

Please note that the activation date for this patch is May 30th, as the present version has a real but small effect on end-user storage pricing. You will need to make sure you have upgraded your miner before this time to connect to the network.

Release 2.6.7.1

25 Apr 13:31
  • Fix a regression introduced by 2.6.7 where packed chunks were not padded correctly;
  • tweak the data discovery and syncing a bit.

The release comes with the prebuilt binaries for the Linux x86_64 platforms.

If you want to run the miner from the existing Git folder, execute the following command:

git fetch --all --tags && git checkout -f N.2.6.7.1

See the mining guide for further instructions.

If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at [email protected].