
opt(torii-core): move off queryqueue for executing tx #2460

Merged 55 commits on Oct 3, 2024.

Commits:
01ce338 opt(torii-core): move off queryqueue for executing tx (Larkooo, Sep 20, 2024)
e0ec767 feat: replace queury queue by executor (Larkooo, Sep 24, 2024)
6097a60 fix: executor (Larkooo, Sep 24, 2024)
043f669 refactor: executor logic (Larkooo, Sep 25, 2024)
9d7d0e7 Merge remote-tracking branch 'upstream/main' into use-tx-executor (Larkooo, Sep 25, 2024)
9314438 fix: tests (Larkooo, Sep 25, 2024)
f9a136f fmt (Larkooo, Sep 25, 2024)
b883343 executor inside of tokio select (Larkooo, Sep 25, 2024)
7771fdf executor graceful exit (Larkooo, Sep 25, 2024)
60c9069 priv execute (Larkooo, Sep 25, 2024)
cd52f0f contracts insertion shouldnt go through executor (Larkooo, Sep 25, 2024)
045eed0 clean code (Larkooo, Sep 25, 2024)
045e4ae exec (Larkooo, Sep 25, 2024)
388ba1e Merge branch 'main' into use-tx-executor (Larkooo, Sep 25, 2024)
b7acef5 fix: tests (Larkooo, Sep 25, 2024)
b94ad7a oneshot channel for execution result (Larkooo, Sep 25, 2024)
c13ff59 fmt (Larkooo, Sep 25, 2024)
7fc27d5 clone shutdown tx (Larkooo, Sep 25, 2024)
260845c fmt (Larkooo, Sep 25, 2024)
a7e4f1f fix: test exec (Larkooo, Sep 25, 2024)
8cf4452 non bloking execute engine (Larkooo, Sep 25, 2024)
2bcf226 executor logs (Larkooo, Sep 25, 2024)
3242ac4 in memory head (Larkooo, Sep 25, 2024)
ef3e4ba fmt (Larkooo, Sep 25, 2024)
299c0b9 fix: tests (Larkooo, Sep 27, 2024)
e4404f1 fixx: libp2p (Larkooo, Sep 27, 2024)
663234a fmt (Larkooo, Sep 27, 2024)
c998428 Merge branch 'main' into use-tx-executor (Larkooo, Sep 27, 2024)
994abc5 try fix libp2p test (Larkooo, Sep 27, 2024)
65612fa fix tests (Larkooo, Sep 27, 2024)
13b1ba7 fmt (Larkooo, Sep 27, 2024)
0c31327 use tempfile for tests (Larkooo, Sep 27, 2024)
afa2a0a fix (Larkooo, Sep 27, 2024)
ef9fafc c (Larkooo, Sep 27, 2024)
4ec379c fix: sql tests (Larkooo, Sep 27, 2024)
d393896 clone (Larkooo, Sep 27, 2024)
1730bfc fmt (Larkooo, Sep 27, 2024)
b708081 fmt (Larkooo, Sep 27, 2024)
7758cf9 no temp file (Larkooo, Sep 27, 2024)
607cd06 tmp file (Larkooo, Sep 30, 2024)
d48dd30 fix: lock issues (Larkooo, Sep 30, 2024)
6d4b99f manuallyt use tmp file (Larkooo, Sep 30, 2024)
baf7f35 fix graphql tests (Larkooo, Sep 30, 2024)
c4f288a fix: tests (Larkooo, Sep 30, 2024)
5dac220 clippy (Larkooo, Sep 30, 2024)
28633b4 fix torii bin (Larkooo, Sep 30, 2024)
fd3c377 engine executions (Larkooo, Sep 30, 2024)
4cabea5 use tmp file for db (Larkooo, Sep 30, 2024)
ee86042 fix: cursor (Larkooo, Sep 30, 2024)
43246b6 chore (Larkooo, Sep 30, 2024)
706c7fb wip (Larkooo, Oct 1, 2024)
9c9e0a3 Merge branch 'main' into use-tx-executor (Larkooo, Oct 1, 2024)
6b6f5a6 cleaning code (Larkooo, Oct 2, 2024)
61f0a4b refactor: handle errors without panic (Larkooo, Oct 2, 2024)
63cca75 use vec (Larkooo, Oct 2, 2024)
4 changes: 4 additions & 0 deletions Cargo.lock


1 change: 1 addition & 0 deletions bin/torii/Cargo.toml
@@ -46,6 +46,7 @@ tracing-subscriber.workspace = true
tracing.workspace = true
url.workspace = true
webbrowser = "0.8"
tempfile.workspace = true

[dev-dependencies]
camino.workspace = true
17 changes: 14 additions & 3 deletions bin/torii/src/main.rs
@@ -27,10 +27,12 @@
use starknet::core::types::Felt;
use starknet::providers::jsonrpc::HttpTransport;
use starknet::providers::JsonRpcClient;
use tempfile::NamedTempFile;
use tokio::sync::broadcast;
use tokio::sync::broadcast::Sender;
use tokio_stream::StreamExt;
use torii_core::engine::{Engine, EngineConfig, IndexingFlags, Processors};
use torii_core::executor::Executor;
use torii_core::processors::event_message::EventMessageProcessor;
use torii_core::processors::generate_event_processors_map;
use torii_core::processors::metadata_update::MetadataUpdateProcessor;
@@ -64,7 +66,7 @@

/// Database filepath (ex: indexer.db). If specified file doesn't exist, it will be
/// created. Defaults to in-memory database
#[arg(short, long, default_value = ":memory:")]
#[arg(short, long, default_value = "")]
database: String,

/// Specify a block to start indexing from, ignored if stored head exists
@@ -163,8 +165,12 @@
})
.expect("Error setting Ctrl-C handler");

let tempfile = NamedTempFile::new()?;
let database_path =
if args.database.is_empty() { tempfile.path().to_str().unwrap() } else { &args.database };

Codecov: added lines bin/torii/src/main.rs#L168-L171 were not covered by tests.

Review comment on lines +168 to +171:
⚠️ Potential issue

Ohayo, sensei! Consider more robust error handling for temporary file path.

The new logic for handling the database path is a good improvement. However, using unwrap() on tempfile.path().to_str() could potentially panic if the path contains non-UTF-8 characters. Consider using a more robust error handling approach.

Here's a suggestion to improve error handling:

let database_path = if args.database.is_empty() {
    tempfile.path().to_str().ok_or_else(|| anyhow::anyhow!("Failed to get temporary file path"))?
} else {
    &args.database
};

This change will propagate the error up the call stack instead of panicking, allowing for more graceful error handling.

let mut options =
SqliteConnectOptions::from_str(&args.database)?.create_if_missing(true).with_regexp();
SqliteConnectOptions::from_str(database_path)?.create_if_missing(true).with_regexp();

Codecov: added line bin/torii/src/main.rs#L173 was not covered by tests.

// Performance settings
options = options.auto_vacuum(SqliteAutoVacuum::None);
@@ -185,7 +191,12 @@
// Get world address
let world = WorldContractReader::new(args.world_address, provider.clone());

let db = Sql::new(pool.clone(), args.world_address).await?;
let (mut executor, sender) = Executor::new(pool.clone(), shutdown_tx.clone()).await?;
tokio::spawn(async move {
executor.run().await.unwrap();
});

Review comment on lines +194 to +198:
⚠️ Potential issue

Ohayo, sensei! Consider improving error handling in the executor task.

The creation and running of the Executor in a separate tokio task is a good approach for concurrent execution. However, using unwrap() in the spawned task could lead to a panic if an error occurs, which might cause the entire application to crash.

Consider handling potential errors more gracefully:

tokio::spawn(async move {
    if let Err(e) = executor.run().await {
        error!("Executor encountered an error: {:?}", e);
        // Optionally, you could send a shutdown signal here
        // let _ = shutdown_tx.send(());
    }
});

This change will log the error instead of panicking, allowing the application to continue running even if the executor encounters an issue.

let db = Sql::new(pool.clone(), args.world_address, sender.clone()).await?;
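The wiring above (a background `Executor` owning the pool, with `Sql` handles sending it work over a channel and receiving results on a per-request oneshot) is an instance of the actor pattern. A minimal synchronous sketch using only std primitives, where `Query`, `run_executor`, and `submit` are illustrative names rather than the PR's actual types:

```rust
use std::sync::mpsc;
use std::thread;

// Each request carries its own reply channel, playing the role of
// tokio's oneshot in the async version.
enum Query {
    Execute { sql: String, reply: mpsc::Sender<Result<u64, String>> },
    Shutdown,
}

// The executor thread owns the "database" state; callers never touch it
// directly, so all writes are serialized through one place.
fn run_executor(rx: mpsc::Receiver<Query>) {
    let mut statements_run = 0u64;
    while let Ok(msg) = rx.recv() {
        match msg {
            Query::Execute { sql: _, reply } => {
                // Stand-in for actually executing the statement.
                statements_run += 1;
                let _ = reply.send(Ok(statements_run));
            }
            // Graceful exit, mirroring the PR's shutdown handling.
            Query::Shutdown => break,
        }
    }
}

// Submit one statement and block until the executor replies.
fn submit(tx: &mpsc::Sender<Query>, sql: &str) -> Result<u64, String> {
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Query::Execute { sql: sql.to_string(), reply: reply_tx })
        .map_err(|e| e.to_string())?;
    reply_rx.recv().map_err(|e| e.to_string())?
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || run_executor(rx));

    let first = submit(&tx, "INSERT INTO entities ...").unwrap();
    let second = submit(&tx, "UPDATE entities ...").unwrap();
    assert_eq!((first, second), (1, 2));

    tx.send(Query::Shutdown).unwrap();
    handle.join().unwrap();
}
```

The async version in the PR has the same shape, with `tokio::spawn` in place of the thread and a oneshot channel in place of the per-request `mpsc` pair.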

Codecov: added lines bin/torii/src/main.rs#L194-L199 were not covered by tests.

let processors = Processors {
event: generate_event_processors_map(vec![
1 change: 1 addition & 0 deletions crates/torii/core/Cargo.toml
@@ -41,3 +41,4 @@ dojo-test-utils.workspace = true
dojo-utils.workspace = true
katana-runner.workspace = true
scarb.workspace = true
tempfile.workspace = true
95 changes: 49 additions & 46 deletions crates/torii/core/src/engine.rs
@@ -7,6 +7,7 @@
use anyhow::Result;
use bitflags::bitflags;
use dojo_world::contracts::world::WorldContractReader;
use futures_util::future::try_join_all;
use hashlink::LinkedHashMap;
use starknet::core::types::{
BlockId, BlockTag, EmittedEvent, Event, EventFilter, Felt, MaybePendingBlockWithReceipts,
@@ -17,7 +18,6 @@
use tokio::sync::broadcast::Sender;
use tokio::sync::mpsc::Sender as BoundedSender;
use tokio::sync::Semaphore;
use tokio::task::JoinSet;
use tokio::time::{sleep, Instant};
use tracing::{debug, error, info, trace, warn};

@@ -108,6 +108,13 @@
pub event: Event,
}

#[derive(Debug)]
pub struct EngineHead {
pub block_number: u64,
pub last_pending_block_world_tx: Option<Felt>,
pub last_pending_block_tx: Option<Felt>,
}

#[allow(missing_debug_implementations)]
pub struct Engine<P: Provider + Send + Sync + std::fmt::Debug + 'static> {
world: Arc<WorldContractReader<P>>,
@@ -151,7 +158,7 @@
// use the start block provided by user if head is 0
let (head, _, _) = self.db.head().await?;
if head == 0 {
self.db.set_head(self.config.start_block);
self.db.set_head(self.config.start_block)?;

Codecov: added line crates/torii/core/src/engine.rs#L161 was not covered by tests.
} else if self.config.start_block != 0 {
warn!(target: LOG_TARGET, "Start block ignored, stored head exists and will be used instead.");
}
@@ -164,6 +171,7 @@
let mut erroring_out = false;
loop {
let (head, last_pending_block_world_tx, last_pending_block_tx) = self.db.head().await?;

tokio::select! {
_ = shutdown_rx.recv() => {
break Ok(());
@@ -179,7 +187,7 @@
}

match self.process(fetch_result).await {
Ok(()) => {}
Ok(_) => self.db.execute().await?,
Err(e) => {
error!(target: LOG_TARGET, error = %e, "Processing fetched data.");
erroring_out = true;
@@ -363,21 +371,15 @@
}))
}

pub async fn process(&mut self, fetch_result: FetchDataResult) -> Result<()> {
pub async fn process(&mut self, fetch_result: FetchDataResult) -> Result<Option<EngineHead>> {

Codecov: added line crates/torii/core/src/engine.rs#L374 was not covered by tests.
match fetch_result {
FetchDataResult::Range(data) => {
self.process_range(data).await?;
}
FetchDataResult::Pending(data) => {
self.process_pending(data).await?;
}
FetchDataResult::None => {}
FetchDataResult::Range(data) => self.process_range(data).await.map(Some),
FetchDataResult::Pending(data) => self.process_pending(data).await.map(Some),
FetchDataResult::None => Ok(None),

Codecov: added lines crates/torii/core/src/engine.rs#L376-L378 were not covered by tests.
}

Ok(())
}

pub async fn process_pending(&mut self, data: FetchPendingResult) -> Result<()> {
pub async fn process_pending(&mut self, data: FetchPendingResult) -> Result<EngineHead> {

Codecov: added line crates/torii/core/src/engine.rs#L382 was not covered by tests.
// Skip transactions that have been processed already
// Our cursor is the last processed transaction

Expand Down Expand Up @@ -407,16 +409,19 @@
// provider. So we can fail silently and try
// again in the next iteration.
warn!(target: LOG_TARGET, transaction_hash = %format!("{:#x}", transaction_hash), "Retrieving pending transaction receipt.");
self.db.set_head(data.block_number - 1);
self.db.set_head(data.block_number - 1)?;

Codecov: added line crates/torii/core/src/engine.rs#L412 was not covered by tests.
if let Some(tx) = last_pending_block_tx {
self.db.set_last_pending_block_tx(Some(tx));
self.db.set_last_pending_block_tx(Some(tx))?;

Codecov: added line crates/torii/core/src/engine.rs#L414 was not covered by tests.
}

if let Some(tx) = last_pending_block_world_tx {
self.db.set_last_pending_block_world_tx(Some(tx));
self.db.set_last_pending_block_world_tx(Some(tx))?;

Codecov: added line crates/torii/core/src/engine.rs#L418 was not covered by tests.
}
self.db.execute().await?;
return Ok(());
return Ok(EngineHead {
block_number: data.block_number - 1,
last_pending_block_tx,
last_pending_block_world_tx,
});

Codecov: added lines crates/torii/core/src/engine.rs#L420-L424 were not covered by tests.
}
_ => {
error!(target: LOG_TARGET, error = %e, transaction_hash = %format!("{:#x}", transaction_hash), "Processing pending transaction.");
Expand All @@ -441,22 +446,24 @@

// Set the head to the last processed pending transaction
// Head block number should still be latest block number
self.db.set_head(data.block_number - 1);
self.db.set_head(data.block_number - 1)?;

Codecov: added line crates/torii/core/src/engine.rs#L449 was not covered by tests.

if let Some(tx) = last_pending_block_tx {
self.db.set_last_pending_block_tx(Some(tx));
self.db.set_last_pending_block_tx(Some(tx))?;

Codecov: added line crates/torii/core/src/engine.rs#L452 was not covered by tests.
}

if let Some(tx) = last_pending_block_world_tx {
self.db.set_last_pending_block_world_tx(Some(tx));
self.db.set_last_pending_block_world_tx(Some(tx))?;

Codecov: added line crates/torii/core/src/engine.rs#L456 was not covered by tests.
}

self.db.execute().await?;

Ok(())
Ok(EngineHead {
block_number: data.block_number - 1,
last_pending_block_world_tx,
last_pending_block_tx,
})

Codecov: added lines crates/torii/core/src/engine.rs#L459-L463 were not covered by tests.
}

pub async fn process_range(&mut self, data: FetchRangeResult) -> Result<()> {
pub async fn process_range(&mut self, data: FetchRangeResult) -> Result<EngineHead> {
// Process all transactions
let mut last_block = 0;
for ((block_number, transaction_hash), events) in data.transactions {
@@ -486,38 +493,36 @@
self.process_block(block_number, data.blocks[&block_number]).await?;
last_block = block_number;
}

if self.db.query_queue.queue.len() >= QUERY_QUEUE_BATCH_SIZE {
self.db.execute().await?;
}
}

// Process parallelized events
self.process_tasks().await?;

self.db.set_head(data.latest_block_number);
self.db.set_last_pending_block_world_tx(None);
self.db.set_last_pending_block_tx(None);
self.db.set_head(data.latest_block_number)?;
self.db.set_last_pending_block_world_tx(None)?;
self.db.set_last_pending_block_tx(None)?;

self.db.execute().await?;

Ok(())
Ok(EngineHead {
block_number: data.latest_block_number,
last_pending_block_tx: None,
last_pending_block_world_tx: None,
})
}

async fn process_tasks(&mut self) -> Result<()> {
// We use a semaphore to limit the number of concurrent tasks
let semaphore = Arc::new(Semaphore::new(self.config.max_concurrent_tasks));

// Run all tasks concurrently
let mut set = JoinSet::new();
let mut handles = Vec::new();
for (task_id, events) in self.tasks.drain() {
let db = self.db.clone();
let world = self.world.clone();
let processors = self.processors.clone();
let semaphore = semaphore.clone();

set.spawn(async move {
let _permit = semaphore.acquire().await.unwrap();
handles.push(tokio::spawn(async move {
let _permit = semaphore.acquire().await?;
let mut local_db = db.clone();
for ParallelizedEvent { event_id, event, block_number, block_timestamp } in events {
if let Some(processor) = processors.event.get(&event.keys[0]) {
Expand All @@ -531,15 +536,13 @@
}
}
}
Ok::<_, anyhow::Error>(local_db)
});

Ok::<_, anyhow::Error>(())
}));
}

// Join all tasks
while let Some(result) = set.join_next().await {
let local_db = result??;
self.db.merge(local_db)?;
}
try_join_all(handles).await?;

Ok(())
}
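The rewritten `process_tasks` bounds fan-out with a `Semaphore` and joins every spawned handle, failing on the first error (the role `try_join_all` plays). The same shape can be sketched with std threads, using a channel of permits as a counting semaphore; names and the doubling "work" here are illustrative, not the PR's actual processing:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Bounded parallel task processing: a channel pre-loaded with
// `max_concurrent` permits limits how many workers run at once, and
// collecting the joined Results propagates the first error.
fn process_tasks(task_ids: Vec<u64>, max_concurrent: usize) -> Result<Vec<u64>, String> {
    let (permit_tx, permit_rx) = mpsc::channel();
    for _ in 0..max_concurrent {
        permit_tx.send(()).unwrap(); // seed the permits
    }
    let permit_rx = Arc::new(Mutex::new(permit_rx));

    let mut handles = Vec::new();
    for id in task_ids {
        let permit_rx = Arc::clone(&permit_rx);
        let permit_tx = permit_tx.clone();
        handles.push(thread::spawn(move || {
            permit_rx.lock().unwrap().recv().unwrap(); // acquire a permit
            let out = id * 2; // stand-in for processing this task's events
            permit_tx.send(()).unwrap(); // release the permit
            Ok::<u64, String>(out)
        }));
    }

    // Join in spawn order; the first Err short-circuits the collect,
    // analogous to try_join_all in the async version.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```

In the PR itself the permits come from `tokio::sync::Semaphore` and the handles from `tokio::spawn`, but the bounding and error-propagation logic is the same.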
@@ -688,7 +691,7 @@
transaction_hash: Felt,
) -> Result<()> {
if self.config.flags.contains(IndexingFlags::RAW_EVENTS) {
self.db.store_event(event_id, event, transaction_hash, block_timestamp);
self.db.store_event(event_id, event, transaction_hash, block_timestamp)?;

Codecov: added line crates/torii/core/src/engine.rs#L694 was not covered by tests.
}

let event_key = event.keys[0];