opt(torii-core): move off queryqueue for executing tx #2460

Merged: 55 commits from use-tx-executor into main, Oct 3, 2024
Changes from 11 commits
Commits (55)
01ce338
opt(torii-core): move off queryqueue for executing tx
Larkooo Sep 20, 2024
e0ec767
feat: replace queury queue by executor
Larkooo Sep 24, 2024
6097a60
fix: executor
Larkooo Sep 24, 2024
043f669
refactor: executor logic
Larkooo Sep 25, 2024
9d7d0e7
Merge remote-tracking branch 'upstream/main' into use-tx-executor
Larkooo Sep 25, 2024
9314438
fix: tests
Larkooo Sep 25, 2024
f9a136f
fmt
Larkooo Sep 25, 2024
b883343
executor inside of tokio select
Larkooo Sep 25, 2024
7771fdf
executor graceful exit
Larkooo Sep 25, 2024
60c9069
priv execute
Larkooo Sep 25, 2024
cd52f0f
contracts insertion shouldnt go through executor
Larkooo Sep 25, 2024
045eed0
clean code
Larkooo Sep 25, 2024
045e4ae
exec
Larkooo Sep 25, 2024
388ba1e
Merge branch 'main' into use-tx-executor
Larkooo Sep 25, 2024
b7acef5
fix: tests
Larkooo Sep 25, 2024
b94ad7a
oneshot channel for execution result
Larkooo Sep 25, 2024
c13ff59
fmt
Larkooo Sep 25, 2024
7fc27d5
clone shutdown tx
Larkooo Sep 25, 2024
260845c
fmt
Larkooo Sep 25, 2024
a7e4f1f
fix: test exec
Larkooo Sep 25, 2024
8cf4452
non bloking execute engine
Larkooo Sep 25, 2024
2bcf226
executor logs
Larkooo Sep 25, 2024
3242ac4
in memory head
Larkooo Sep 25, 2024
ef3e4ba
fmt
Larkooo Sep 25, 2024
299c0b9
fix: tests
Larkooo Sep 27, 2024
e4404f1
fixx: libp2p
Larkooo Sep 27, 2024
663234a
fmt
Larkooo Sep 27, 2024
c998428
Merge branch 'main' into use-tx-executor
Larkooo Sep 27, 2024
994abc5
try fix libp2p test
Larkooo Sep 27, 2024
65612fa
fix tests
Larkooo Sep 27, 2024
13b1ba7
fmt
Larkooo Sep 27, 2024
0c31327
use tempfile for tests
Larkooo Sep 27, 2024
afa2a0a
fix
Larkooo Sep 27, 2024
ef9fafc
c
Larkooo Sep 27, 2024
4ec379c
fix: sql tests
Larkooo Sep 27, 2024
d393896
clone
Larkooo Sep 27, 2024
1730bfc
fmt
Larkooo Sep 27, 2024
b708081
fmt
Larkooo Sep 27, 2024
7758cf9
no temp file
Larkooo Sep 27, 2024
607cd06
tmp file
Larkooo Sep 30, 2024
d48dd30
fix: lock issues
Larkooo Sep 30, 2024
6d4b99f
manuallyt use tmp file
Larkooo Sep 30, 2024
baf7f35
fix graphql tests
Larkooo Sep 30, 2024
c4f288a
fix: tests
Larkooo Sep 30, 2024
5dac220
clippy
Larkooo Sep 30, 2024
28633b4
fix torii bin
Larkooo Sep 30, 2024
fd3c377
engine executions
Larkooo Sep 30, 2024
4cabea5
use tmp file for db
Larkooo Sep 30, 2024
ee86042
fix: cursor
Larkooo Sep 30, 2024
43246b6
chore
Larkooo Sep 30, 2024
706c7fb
wip
Larkooo Oct 1, 2024
9c9e0a3
Merge branch 'main' into use-tx-executor
Larkooo Oct 1, 2024
6b6f5a6
cleaning code
Larkooo Oct 2, 2024
61f0a4b
refactor: handle errors without panic
Larkooo Oct 2, 2024
63cca75
use vec
Larkooo Oct 2, 2024
5 changes: 4 additions & 1 deletion bin/torii/src/main.rs
@@ -31,6 +31,7 @@ use tokio::sync::broadcast;
use tokio::sync::broadcast::Sender;
use tokio_stream::StreamExt;
use torii_core::engine::{Engine, EngineConfig, IndexingFlags, Processors};
use torii_core::executor::Executor;
use torii_core::processors::event_message::EventMessageProcessor;
use torii_core::processors::generate_event_processors_map;
use torii_core::processors::metadata_update::MetadataUpdateProcessor;
@@ -185,7 +186,8 @@ async fn main() -> anyhow::Result<()> {
// Get world address
let world = WorldContractReader::new(args.world_address, provider.clone());

let db = Sql::new(pool.clone(), args.world_address).await?;
let (mut executor, sender) = Executor::new(pool.clone(), shutdown_tx.clone()).await?;
let db = Sql::new(pool.clone(), args.world_address, sender.clone()).await?;

let processors = Processors {
event: generate_event_processors_map(vec![
@@ -289,6 +291,7 @@

tokio::select! {
res = engine.start() => res?,
_ = executor.run() => {},
_ = proxy_server.start(shutdown_tx.subscribe()) => {},
_ = graphql_server => {},
_ = grpc_server => {},
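Taken together, the main.rs changes wire the executor in alongside the existing servers: the executor owns the database write path, and Sql only holds the sending half of a channel. A simplified sketch of the resulting setup (CLI parsing, processors, and server construction elided; the torii_core::sql::Sql path and anything beyond the signatures visible in this diff are assumptions):

use sqlx::{Pool, Sqlite};
use starknet::core::types::Felt;
use tokio::sync::broadcast;
use torii_core::executor::Executor;
use torii_core::sql::Sql;

async fn wire_up(pool: Pool<Sqlite>, world_address: Felt) -> anyhow::Result<()> {
    let (shutdown_tx, mut shutdown_rx) = broadcast::channel(1);

    // The executor owns the write transaction; Sql just queues statements through the sender.
    let (mut executor, sender) = Executor::new(pool.clone(), shutdown_tx.clone()).await?;
    let _db = Sql::new(pool.clone(), world_address, sender.clone()).await?;

    // Engine and servers are built with `_db` as before (elided).
    tokio::select! {
        // The executor must be polled alongside the other futures,
        // otherwise queued queries are never applied or committed.
        _ = executor.run() => {},
        _ = shutdown_rx.recv() => {},
    }
    Ok(())
}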
38 changes: 13 additions & 25 deletions crates/torii/core/src/engine.rs
@@ -151,7 +151,7 @@ impl<P: Provider + Send + Sync + std::fmt::Debug + 'static> Engine<P> {
// use the start block provided by user if head is 0
let (head, _, _) = self.db.head().await?;
if head == 0 {
self.db.set_head(self.config.start_block);
self.db.set_head(self.config.start_block)?;
} else if self.config.start_block != 0 {
warn!(target: LOG_TARGET, "Start block ignored, stored head exists and will be used instead.");
}
@@ -179,7 +179,7 @@ impl<P: Provider + Send + Sync + std::fmt::Debug + 'static> Engine<P> {
}

match self.process(fetch_result).await {
Ok(()) => {}
Ok(()) => self.db.execute()?,
Err(e) => {
error!(target: LOG_TARGET, error = %e, "Processing fetched data.");
erroring_out = true;
@@ -407,15 +407,14 @@ impl<P: Provider + Send + Sync + std::fmt::Debug + 'static> Engine<P> {
// provider. So we can fail silently and try
// again in the next iteration.
warn!(target: LOG_TARGET, transaction_hash = %format!("{:#x}", transaction_hash), "Retrieving pending transaction receipt.");
self.db.set_head(data.block_number - 1);
self.db.set_head(data.block_number - 1)?;
if let Some(tx) = last_pending_block_tx {
self.db.set_last_pending_block_tx(Some(tx));
self.db.set_last_pending_block_tx(Some(tx))?;
}

if let Some(tx) = last_pending_block_world_tx {
self.db.set_last_pending_block_world_tx(Some(tx));
self.db.set_last_pending_block_world_tx(Some(tx))?;
}
self.db.execute().await?;
return Ok(());
}
_ => {
@@ -441,18 +440,16 @@ impl<P: Provider + Send + Sync + std::fmt::Debug + 'static> Engine<P> {

// Set the head to the last processed pending transaction
// Head block number should still be latest block number
self.db.set_head(data.block_number - 1);
self.db.set_head(data.block_number - 1)?;

if let Some(tx) = last_pending_block_tx {
self.db.set_last_pending_block_tx(Some(tx));
self.db.set_last_pending_block_tx(Some(tx))?;
}

if let Some(tx) = last_pending_block_world_tx {
self.db.set_last_pending_block_world_tx(Some(tx));
self.db.set_last_pending_block_world_tx(Some(tx))?;
}

self.db.execute().await?;

Ok(())
}

@@ -486,20 +483,14 @@ impl<P: Provider + Send + Sync + std::fmt::Debug + 'static> Engine<P> {
self.process_block(block_number, data.blocks[&block_number]).await?;
last_block = block_number;
}

if self.db.query_queue.queue.len() >= QUERY_QUEUE_BATCH_SIZE {
self.db.execute().await?;
}
}

// Process parallelized events
self.process_tasks().await?;

self.db.set_head(data.latest_block_number);
self.db.set_last_pending_block_world_tx(None);
self.db.set_last_pending_block_tx(None);

self.db.execute().await?;
self.db.set_head(data.latest_block_number)?;
self.db.set_last_pending_block_world_tx(None)?;
self.db.set_last_pending_block_tx(None)?;

Ok(())
}
@@ -536,10 +527,7 @@ impl<P: Provider + Send + Sync + std::fmt::Debug + 'static> Engine<P> {
}

// Join all tasks
while let Some(result) = set.join_next().await {
let local_db = result??;
self.db.merge(local_db)?;
}
while let Some(_) = set.join_next().await {}

⚠️ Potential issue

Sensei, handle errors from joined tasks in the JoinSet

The loop while let Some(_) = set.join_next().await {} ignores the results of the tasks. By not handling the Result from join_next(), any errors occurring in the spawned tasks may be missed. Consider capturing and handling these results to ensure that any task failures are properly addressed.

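One minimal way to act on this, assuming the spawned tasks now return anyhow::Result<()> (there is no longer a per-task database to merge back):

// Join all tasks, surfacing panics and task-level errors instead of dropping them.
while let Some(joined) = set.join_next().await {
    match joined {
        Ok(Ok(())) => {}
        // The task itself returned an error.
        Ok(Err(e)) => return Err(e),
        // The task panicked or was cancelled.
        Err(e) => return Err(anyhow::anyhow!("parallelized task failed: {e}")),
    }
}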

Ok(())
}
@@ -688,7 +676,7 @@ impl<P: Provider + Send + Sync + std::fmt::Debug + 'static> Engine<P> {
transaction_hash: Felt,
) -> Result<()> {
if self.config.flags.contains(IndexingFlags::RAW_EVENTS) {
self.db.store_event(event_id, event, transaction_hash, block_timestamp);
self.db.store_event(event_id, event, transaction_hash, block_timestamp)?;
}

let event_key = event.keys[0];
223 changes: 223 additions & 0 deletions crates/torii/core/src/executor.rs
@@ -0,0 +1,223 @@
use std::collections::VecDeque;
use std::mem;

use anyhow::{Context, Result};
use dojo_types::schema::{Struct, Ty};
use sqlx::query::Query;
use sqlx::sqlite::SqliteArguments;
use sqlx::{FromRow, Pool, Sqlite, Transaction};
use starknet::core::types::Felt;
use tokio::sync::broadcast::{Receiver, Sender};
use tokio::sync::mpsc::{unbounded_channel, UnboundedReceiver, UnboundedSender};

use crate::simple_broker::SimpleBroker;
use crate::types::{
Entity as EntityUpdated, Event as EventEmitted, EventMessage as EventMessageUpdated,
Model as ModelRegistered,
};

#[derive(Debug, Clone)]
pub enum Argument {
Null,
Int(i64),
Bool(bool),
String(String),
FieldElement(Felt),
}

#[derive(Debug, Clone)]
pub enum BrokerMessage {
ModelRegistered(ModelRegistered),
EntityUpdated(EntityUpdated),
EventMessageUpdated(EventMessageUpdated),
EventEmitted(EventEmitted),
}

#[derive(Debug, Clone)]
pub struct DeleteEntityQuery {
pub entity_id: String,
pub event_id: String,
pub block_timestamp: String,
pub ty: Ty,
}

#[derive(Debug, Clone)]
pub enum QueryType {
SetEntity(Ty),
DeleteEntity(DeleteEntityQuery),
EventMessage(Ty),
RegisterModel,
StoreEvent,
Execute,
Other,
}

pub struct Executor<'c> {
pool: Pool<Sqlite>,
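// Open write transaction; every queued statement runs inside it until an Execute message commits it.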
transaction: Transaction<'c, Sqlite>,
publish_queue: VecDeque<BrokerMessage>,
rx: UnboundedReceiver<QueryMessage>,
shutdown_rx: Receiver<()>,
}

pub struct QueryMessage {
pub statement: String,
pub arguments: Vec<Argument>,
pub query_type: QueryType,
}

impl<'c> Executor<'c> {
pub async fn new(
pool: Pool<Sqlite>,
shutdown_tx: Sender<()>,
) -> Result<(Self, UnboundedSender<QueryMessage>)> {
let (tx, rx) = unbounded_channel();
let transaction = pool.begin().await?;
let publish_queue = VecDeque::new();
let shutdown_rx = shutdown_tx.subscribe();

Ok((Executor { pool, transaction, publish_queue, rx, shutdown_rx }, tx))
}

pub async fn run(&mut self) -> Result<()> {
loop {
tokio::select! {
_ = self.shutdown_rx.recv() => {
break Ok(());
}
Some(msg) = self.rx.recv() => {
let QueryMessage { statement, arguments, query_type } = msg;
let mut query = sqlx::query(&statement);

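// Bind arguments in order; Felt values are bound as 0x-prefixed hex strings.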
for arg in &arguments {
query = match arg {
Argument::Null => query.bind(None::<String>),
Argument::Int(integer) => query.bind(integer),
Argument::Bool(bool) => query.bind(bool),
Argument::String(string) => query.bind(string),
Argument::FieldElement(felt) => query.bind(format!("{:#x}", felt)),
}
}

self.handle_query_type(query, query_type, &statement, &arguments).await?;
}
}
}
}

⚠️ Potential issue

Consider improving error handling in the transaction lifecycle.

Ohayo, sensei! While the overall implementation is solid, there's room for improvement in error handling, particularly in the transaction lifecycle. Currently, if an error occurs during query execution, the transaction may remain open, potentially leading to database inconsistencies.

Consider moving the message handling into a helper and handling its errors in the main loop, so the transaction is rolled back and restarted when a failure occurs:

pub async fn run(&mut self) -> Result<()> {
    loop {
        match self.process_messages().await {
            Ok(should_break) => {
                if should_break {
                    break;
                }
            }
            Err(e) => {
                // Roll back the transaction on error. rollback() consumes the
                // transaction, so swap in a fresh one for the next iteration first.
                let failed_tx = std::mem::replace(&mut self.transaction, self.pool.begin().await?);
                failed_tx.rollback().await?;
                // Log the error or handle it as appropriate
                eprintln!("Error processing messages: {:?}", e);
            }
        }
    }
    Ok(())
}

async fn process_messages(&mut self) -> Result<bool> {
    tokio::select! {
        _ = self.shutdown_rx.recv() => {
            return Ok(true);
        }
        Some(msg) = self.rx.recv() => {
            let QueryMessage { statement, arguments, query_type } = msg;
            // ... (rest of the existing code)
            self.handle_query_type(query, query_type, &statement, &arguments).await?;
        }
    }
    Ok(false)
}

This approach ensures that the transaction is rolled back in case of errors, maintaining database consistency.


async fn handle_query_type<'a>(
&mut self,
query: Query<'a, Sqlite, SqliteArguments<'a>>,
query_type: QueryType,
statement: &str,
arguments: &[Argument],
) -> Result<()> {
let tx = &mut self.transaction;

match query_type {
QueryType::SetEntity(entity) => {
let row = query.fetch_one(&mut **tx).await.with_context(|| {
format!("Failed to execute query: {:?}, args: {:?}", statement, arguments)
})?;
let mut entity_updated = EntityUpdated::from_row(&row)?;
entity_updated.updated_model = Some(entity);
entity_updated.deleted = false;
let broker_message = BrokerMessage::EntityUpdated(entity_updated);
self.publish_queue.push_back(broker_message);
}
QueryType::DeleteEntity(entity) => {
let delete_model = query.execute(&mut **tx).await.with_context(|| {
format!("Failed to execute query: {:?}, args: {:?}", statement, arguments)
})?;
if delete_model.rows_affected() == 0 {
return Ok(());
}

let row = sqlx::query(
"UPDATE entities SET updated_at=CURRENT_TIMESTAMP, executed_at=?, event_id=? \
WHERE id = ? RETURNING *",
)
.bind(entity.block_timestamp)
.bind(entity.event_id)
.bind(entity.entity_id)
.fetch_one(&mut **tx)
.await?;
let mut entity_updated = EntityUpdated::from_row(&row)?;
entity_updated.updated_model =
Some(Ty::Struct(Struct { name: entity.ty.name(), children: vec![] }));

let count = sqlx::query_scalar::<_, i64>(
"SELECT count(*) FROM entity_model WHERE entity_id = ?",
)
.bind(entity_updated.id.clone())
.fetch_one(&mut **tx)
.await?;

// Delete entity if all of its models are deleted
if count == 0 {
sqlx::query("DELETE FROM entities WHERE id = ?")
.bind(entity_updated.id.clone())
.execute(&mut **tx)
.await?;
entity_updated.deleted = true;
}

let broker_message = BrokerMessage::EntityUpdated(entity_updated);
self.publish_queue.push_back(broker_message);
}
QueryType::RegisterModel => {
let row = query.fetch_one(&mut **tx).await.with_context(|| {
format!("Failed to execute query: {:?}, args: {:?}", statement, arguments)
})?;
let model_registered = ModelRegistered::from_row(&row)?;
self.publish_queue.push_back(BrokerMessage::ModelRegistered(model_registered));
}
QueryType::EventMessage(entity) => {
let row = query.fetch_one(&mut **tx).await.with_context(|| {
format!("Failed to execute query: {:?}, args: {:?}", statement, arguments)
})?;
let mut event_message = EventMessageUpdated::from_row(&row)?;
event_message.updated_model = Some(entity);
let broker_message = BrokerMessage::EventMessageUpdated(event_message);
self.publish_queue.push_back(broker_message);
}
QueryType::StoreEvent => {
let row = query.fetch_one(&mut **tx).await.with_context(|| {
format!("Failed to execute query: {:?}, args: {:?}", statement, arguments)
})?;
let event = EventEmitted::from_row(&row)?;
self.publish_queue.push_back(BrokerMessage::EventEmitted(event));
}
QueryType::Execute => {
self.execute().await?;
}
QueryType::Other => {
query.execute(&mut **tx).await.with_context(|| {
format!("Failed to execute query: {:?}, args: {:?}", statement, arguments)
})?;
}
}

Ok(())
}

async fn execute(&mut self) -> Result<()> {
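// Commit the current transaction, immediately start a fresh one, then flush queued broker messages.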
let transaction = mem::replace(&mut self.transaction, self.pool.begin().await?);
transaction.commit().await?;

while let Some(message) = self.publish_queue.pop_front() {
send_broker_message(message);
}

🛠️ Refactor suggestion

Consider asynchronous message publishing for improved performance.

Ohayo, sensei! To potentially improve performance, especially under heavy load, consider publishing broker messages asynchronously. This approach can prevent blocking the main execution flow while messages are being published.

Here's a suggestion for modifying the loop:

use futures::future::join_all;

// ...

let publish_futures: Vec<_> = self.publish_queue
    .drain(..)
    .map(|message| {
        tokio::spawn(async move {
            send_broker_message(message);
        })
    })
    .collect();

join_all(publish_futures).await;

This modification spawns a new task for each message, allowing them to be published concurrently. The join_all ensures that all messages are published before proceeding.


Ok(())
}
}

fn send_broker_message(message: BrokerMessage) {
match message {
BrokerMessage::ModelRegistered(model) => SimpleBroker::publish(model),
BrokerMessage::EntityUpdated(entity) => SimpleBroker::publish(entity),
BrokerMessage::EventMessageUpdated(event) => SimpleBroker::publish(event),
BrokerMessage::EventEmitted(event) => SimpleBroker::publish(event),
}
}
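On the producer side, the query_queue-era Sql API is replaced by sending QueryMessage values over this channel. A hypothetical sketch of what a caller could look like (SqlLike, the field name, and the SQL text are invented for illustration; the real Sql changes are not part of this diff):

use tokio::sync::mpsc::UnboundedSender;
use torii_core::executor::{Argument, QueryMessage, QueryType};

struct SqlLike {
    // Invented field name; stands in for whatever Sql actually stores.
    executor_sender: UnboundedSender<QueryMessage>,
}

impl SqlLike {
    // Setters no longer touch the database; they only enqueue a statement.
    fn set_head(&mut self, head: u64) -> anyhow::Result<()> {
        self.executor_sender
            .send(QueryMessage {
                statement: "UPDATE indexers SET head = ?".to_string(), // placeholder SQL
                arguments: vec![Argument::Int(head as i64)],
                query_type: QueryType::Other,
            })
            .map_err(|_| anyhow::anyhow!("executor channel closed"))
    }

    // execute() asks the executor to commit its transaction and publish the
    // queued broker messages; the statement text is unused for this variant.
    fn execute(&self) -> anyhow::Result<()> {
        self.executor_sender
            .send(QueryMessage {
                statement: String::new(),
                arguments: vec![],
                query_type: QueryType::Execute,
            })
            .map_err(|_| anyhow::anyhow!("executor channel closed"))
    }
}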
2 changes: 1 addition & 1 deletion crates/torii/core/src/lib.rs
@@ -1,9 +1,9 @@
pub mod cache;
pub mod engine;
pub mod error;
pub mod executor;
pub mod model;
pub mod processors;
pub mod query_queue;
pub mod simple_broker;
pub mod sql;
pub mod types;
6 changes: 2 additions & 4 deletions crates/torii/core/src/processors/metadata_update.rs
@@ -64,7 +64,7 @@ where
uri = %uri_str,
"Resource metadata set."
);
db.set_metadata(resource, &uri_str, block_timestamp);
db.set_metadata(resource, &uri_str, block_timestamp)?;

let db = db.clone();
let resource = *resource;
@@ -83,9 +83,7 @@ where
async fn try_retrieve(mut db: Sql, resource: Felt, uri_str: String) {
match metadata(uri_str.clone()).await {
Ok((metadata, icon_img, cover_img)) => {
db.update_metadata(&resource, &uri_str, &metadata, &icon_img, &cover_img)
.await
.unwrap();
db.update_metadata(&resource, &uri_str, &metadata, &icon_img, &cover_img).unwrap();

⚠️ Potential issue

Ohayo, sensei! We might want to reconsider this change.

The modification to db.update_metadata(&resource, &uri_str, &metadata, &icon_img, &cover_img).unwrap(); raises some concerns:

  1. Removing await changes this from an asynchronous to a synchronous call. Is this intentional? If update_metadata is still an async function, the returned future is never polled and the update silently never runs.

  2. The call still ends in unwrap(), so any error from update_metadata will panic the spawned task instead of being handled or logged.

Consider the following improvements:

match db.update_metadata(&resource, &uri_str, &metadata, &icon_img, &cover_img).await {
    Ok(_) => info!(
        target: LOG_TARGET,
        resource = %format!("{:#x}", resource),
        "Updated resource metadata from ipfs."
    ),
    Err(e) => error!(
        target: LOG_TARGET,
        resource = %format!("{:#x}", resource),
        error = %e,
        "Failed to update resource metadata from ipfs."
    ),
}

This approach maintains the asynchronous nature of the call and provides proper error handling, logging any issues that might occur during the update process.

info!(
target: LOG_TARGET,
resource = %format!("{:#x}", resource),
2 changes: 1 addition & 1 deletion crates/torii/core/src/processors/store_transaction.rs
@@ -21,7 +21,7 @@ impl<P: Provider + Sync + std::fmt::Debug> TransactionProcessor<P> for StoreTran
transaction: &Transaction,
) -> Result<(), Error> {
let transaction_id = format!("{:#064x}:{:#x}", block_number, transaction_hash);
db.store_transaction(transaction, &transaction_id, block_timestamp);
db.store_transaction(transaction, &transaction_id, block_timestamp)?;
Ok(())
}
}